I was working on a wrapper script for a tool. The wrapper script should prepare the environment, call the tool in the background, and then exit. It looks something like this:

```bash
#!/bin/bash
export FOO=1
tool "$@" &
```

Nothing spectacular here, except that I erroneously did not call the tool (/usr/bin/tool) and instead effectively called the wrapper script itself (~/bin/tool). Unknowingly, I ran the wrapper script and nothing happened; at least nothing visible. I wondered why nothing happened, opened the script again, saw the error (not calling /usr/bin/tool), fixed it to run the actual tool, and saved. Suddenly the tool popped up. After some pondering I recognized that the wrapper script had been running itself in the background over and over again, and when I edited the script to call the actual tool, the next invocation of the script immediately called the tool and stopped the chain. Since each invocation exits after spawning a single child, it is not a fork bomb; it seems to be a recursive ghost.

I recreated the silent recursive wrapper and tried to detect it. My knowledge is limited; I am no guru. I tried tools like ps and pgrep. I gave the script a catchy name to pgrep for, but pgrep sees nothing most of the time. Running pgrep in a loop will sometimes catch the runaway script, roughly once every ten seconds (I did not do statistics). The recursive ghost script is benign and barely eats any resources. Perhaps its only visible effect is that it makes PIDs rise quickly, because every new invocation gets a new PID.

How can I detect and stop a runaway recursive ghost script which calls itself in the background and exits?

Here is how to reproduce it. Create a script called woop with the following content:

```sh
#!/bin/sh
./woop &
```

Open two terminals and have the script open in a text editor. In the first terminal run ./woop; notice that nothing visible happens and it returns to the prompt immediately. In the second terminal run:

```sh
while true; do pgrep -fa woop; done
```

You will see one result every ten seconds or so.
In the text editor, change the line ./woop & to ./woops & (or similar) and save. Notice the error in the first terminal (./woop: line 2: ./woops: No such file or directory), and notice that the loop in the second terminal no longer finds any results.
Using forkstat, if available, should give a good indication if any process is running amok. E.g.:

```sh
forkstat -e fork
```

Once identified, do either or all of:

```sh
chmod -x /path/to/file
mv ...
rm ...
```

Optionally use the -S flag (here with simplified stats):

```
$ forkstat -S
... loads of lines
^C
 Fork  Exec  Exit ... Total Process
11546 11532 11547 ... 34625 /bin/bash - ./woop
```

Related: How to track newly created processes in Linux? How does Linux determine the next PID? https://github.com/ColinIanKing/forkstat
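A minimal sketch of the chmod -x route the answer suggests: removing the execute bit makes the next generation's exec fail, which is exactly what breaks the chain (./woop is the name from the question's reproduction recipe; pkill additionally stops a generation that is currently alive):

```shell
#!/bin/sh
# Break a self-respawning chain: after chmod -x, the running copy's
# "./woop &" fails with "Permission denied" and nothing new is spawned.
target=${1:-./woop}
chmod -x "$target" 2>/dev/null && echo "execute bit removed from $target"
pkill -f "$target" 2>/dev/null   # also stop a generation currently running
true                             # a missing target is not an error for this sketch
```

This works even when pgrep rarely catches the short-lived process, because the filesystem change affects whichever generation execs next.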
How to detect and stop a script which calls itself in the background and exits
I have a fairly big application under my care. As part of its job it spawns some child processes and needs to monitor their state (running, crashed). Child process deaths were detected by setting a signal handler for SIGCHLD using signal(2). Some time ago I migrated it to signalfd(2). What I did was simple:

- removed the signal handler for SIGCHLD
- blocked SIGCHLD and created a signalfd(2) to capture SIGCHLD

My problem is that the file descriptor I created does not seem to capture SIGCHLD. However, if I ignore the return value of the read(2) call on that descriptor and call waitpid(-1, &status, WNOHANG), I can obtain the information about exited child processes. So it looks like the notification is delivered, but my signalfd(2) descriptor just ignores it.

I made sure to have exactly one place in the program where read(2) is called on the signalfd(2) descriptor, exactly one place where waitpid(2) is called, and exactly one place where the signal handling is set up. The setup code looks like this:

```cpp
sigset_t mask;
sigemptyset(&mask);
sigaddset(&mask, SIGCHLD);
sigprocmask(SIG_BLOCK, &mask, nullptr);
int signal_fd = signalfd(-1, &mask, SFD_NONBLOCK | SFD_CLOEXEC);
if (signal_fd == -1) {
    /* log failure and exit */
} else {
    /* log success */
}
```

The reading code looks like this:

```cpp
signalfd_siginfo info;
memset(&info, 0, sizeof(info));
if (read(signal_fd, &info, sizeof(info)) == -1) {
    /*
     * Log failure and return.
     * The file descriptor *always* returns EAGAIN, even in
     * the presence of dead child processes.
     */
    return;
}
if (info.ssi_signo == SIGCHLD) {
    int status = 0;
    int child = waitpid(-1, &status, WNOHANG);
    /*
     * Process the result of waitpid(2). The call is successful even if
     * the read of the signalfd above returned an error.
     */
}
```

What am I doing wrong?

Edit: The problem is that read(2) fails with EAGAIN even if there are dead child processes ready to be waitpid(2)-ed, which means that a SIGCHLD must have been delivered to my master process.
I know that read(2) may return EAGAIN for non-blocking file descriptors and the code accounts for that.
When migrating from signal handling based on signal(2) or sigaction(2) to signalfd(2), you change the way you receive signals. The old way leaves the signals unblocked; the new one needs them blocked.

If you have some regions of code in which you do not want to be disturbed by signals, you need to block them:

```cpp
sigset_t mask;
sigemptyset(&mask);
sigaddset(&mask, SIGFOO);
pthread_sigmask(SIG_BLOCK, &mask, nullptr);
{
    /* not-to-be-disturbed code here */
}
```

With signal(2) or sigaction(2), this requires that you later unblock the signals, because otherwise the handlers will not be able to pick them up:

```cpp
{
    /* not-to-be-disturbed code here */
}
pthread_sigmask(SIG_UNBLOCK, &mask, nullptr);
```

However, for signalfd(2) the signals must stay blocked. If you have a long-neglected code path that you rarely look at and it follows the old way, i.e. blocks and unblocks some signals, it may trash the signal-reading file descriptor you got from signalfd(2).

TL;DR: When migrating to signalfd(2), vet your code for any calls to signal(2), sigaction(2), pthread_sigmask(2), etc., to check that you did not forget about some code path that messes with the signal mask.

(Two and a half years later may be a bit late, but maybe the answer will help someone.)
File descriptor from `signalfd(2)` is never ready to read
How can I prevent processes from starting automatically? For example, I used a MySQL database about a year ago; I killed the mysqld process, and now I see it's running again. There are also many other processes whose purpose I don't know. I'd like to have more control over the processes that are running: what I can turn off, what will start periodically, etc.

```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.2 LTS
Release:        12.04
Codename:       precise
```
You can create an override for the service. Search for the service in /etc/init/. I have no MySQL installed, so I don't know the exact name; I'll take pulseaudio as an example:

```
ls -al /etc/init/ | grep pulse
-rw-r--r-- 1 root root 1890 Apr  4  2014 pulseaudio.conf
```

Then create the override file for your service:

```
sudo touch /etc/init/pulseaudio.override
```

Then set the service to manual:

```
echo "manual" | sudo tee -a /etc/init/pulseaudio.override
```

If you need the service, you can start it manually with:

```
sudo service pulseaudio start
```

If you want your service to start at boot again, remove the created file.
Prevent starting processes automatically
I have a few processes that spring up, and I am able to print a line of the PGIDs that I would like to feed into a kill command. Here is what I have:

```
sudo ps o pgid,args | grep mininet: | sudo awk '{print -$0}'
```

which returns something like:

```
-3834 -3841 -3844 -3846 -3848 -3853 -3856 -3859 -3862
```

I negated the output in the {print -$0} part so that the child processes get killed too (kill treats a negative argument as a process group). The grep command searches for an argument that denotes the parent programs. Now I would like to call sudo kill -SIGSTOP on these, but I see here http://www.chemie.fu-berlin.de/chemnet/use/info/gawk/gawk_9.html that you can't use commands inside awk other than conditionals, print, etc. Am I mistaken about this, or is there a way to redirect the output to the kill command to stop the processes?

Context: pausing the mininet network emulator. I'd like to do this as a one-liner because it would be cool. I'm sort of confused about how precedence works with | and how to feed one command into another. Coding by the Unix philosophy, I shouldn't worry about bottlenecks until later, but if someone thinks this is a bad way to do it, I would appreciate that info too. Thanks!

Edit: This command stops the processes:

```
sudo ps o pgid,args | grep mininet: | sudo awk '{system("sudo kill --signal SIGSTOP -"$1)}' -
```

In awk you can use system("program"). Taking the advice to use pgrep, this works too:

```
sudo pgrep -f mininet: | sudo awk '{system("sudo kill --signal SIGCONT -"$1)}' -
```
This command stops the processes:

```
sudo ps o pgid,args | grep mininet: | sudo awk '{system("sudo kill --signal SIGSTOP -"$1)}' -
```

In awk you can use system("program"). Taking the advice to use pgrep, this works too:

```
sudo pgrep -f mininet: | sudo awk '{system("sudo kill --signal SIGCONT -"$1)}' -
```
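The per-line operation the one-liner performs can be seen in isolation: signalling a negative PID targets a whole process group. A sketch using a throwaway sleep in its own session, so the mechanism is visible without mininet:

```shell
#!/bin/sh
# Stop a whole process group via a negative PID, then resume and reap it.
setsid sleep 60 &                       # new session => its own process group
victim=$!
sleep 1
pgid=$(ps -o pgid= -p "$victim" | tr -d ' ')
kill -STOP -- "-$pgid"                  # negative argument: the whole group
ps -o pid=,stat= -p "$victim"           # stat "T" means stopped
kill -CONT -- "-$pgid"
kill -TERM -- "-$pgid"
```

The `--` stops kill from parsing the negative PGID as an option, which is why the one-liner's `-"$1"` string concatenation also works.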
kill processes in one line with kill, awk, ps, and grep
I have a Python script that does more or less this:

```python
current_tasks = TaskManager()
MAXPROCS = 8
while len(outstanding_tasks) > 0:
    if len(current_tasks.running) < MAXPROCS:
        current_tasks.addTask(outstanding_tasks.next())
    else:
        current_tasks.wait_for_one_finish()
```

outstanding_tasks.next() is basically this:

```python
p = subprocess.Popen([task], stdout=OUTFILE, stderr=subprocess.PIPE)
```

and current_tasks.wait_for_one_finish() is:

```python
waiting = True
while waiting:
    for t in tasks:
        ret = t.poll()
        if ret is not None:
            handle_stderr(t)
            waiting = False
            break
```

Fairly straightforward: spawn tasks on demand until we're running 8 of them at a time, then block until they finish, one at a time, before spawning more.

The problem is this: stderr=subprocess.PIPE. Each subprocess is writing its stderr to a pipe. If it crashes and wants to write a big log message or whatever to the pipe, and that message exceeds the size of the pipe buffer, the write() will block. The process won't finish, so my controlling process will never see a return value from poll() and go read from its stderr.

There are obviously ways around this:

- redirect stderr from my subprocesses to temporary files
- spawn a Python thread that reads from the stderr file descriptors of all running tasks and buffers them in memory
- have a select() or something in my little ad hoc event loop

But all of that is stuff I have to handle in my application code. What I want to know is: is there some way to get the behaviour of a pipe, but with a nice big elastic buffer, so that the subprocesses can always do a successful write() to their stderr and then exit, without me having to look at it until they're done?
The short answer is: there isn't. You've already highlighted the workarounds necessary to deal with large amounts of data being sent through a subprocess pipe. The "nice big elastic buffer" pipe doesn't exist. This is called out in the Subprocess Management section of the Python documentation as a potential source of deadlocks, with the suggested remedy that you call proc.communicate() to read from stderr. The problem in your case is that you can't call communicate() on all processes at the same time, and that method blocks until all data is read.

If it were me, I would probably use a select() call on all of the stderr descriptors rather than a proc.poll() loop. select() can block until any process does something, and when a process exits it closes its stderr pipe, so you kill two birds with one stone (you know when data is written to stderr and you know when the process dies).
Managing the output streams of many subprocesses without deadlocks
qBittorrent-nox was running perfectly until last week, but since then it always crashes on my Ubuntu 14.04. Theoretically it's logging, but the log file only contains these lines (in Hungarian; they only say how to reach the Web UI at localhost:8080 and warn that the admin password is still the default):

```
******** Információ ********
A qBittorrent vezérléséhez, nyisd meg ezt a címet: localhost:8080
Web UI adminisztrátor felhasználó neve: admin
Web UI adminisztrátor jelszó még az alapértelmezett: adminadmin
Ez biztonsági kockázatot jelent. Kérlek változtass jelszót a program beállításinál.
******** Információ ********
A qBittorrent vezérléséhez, nyisd meg ezt a címet: localhost:8080
Web UI adminisztrátor felhasználó neve: weylyn1
******** Információ ********
A qBittorrent vezérléséhez, nyisd meg ezt a címet: localhost:8080
Web UI adminisztrátor felhasználó neve: weylyn1
******** Információ ********
A qBittorrent vezérléséhez, nyisd meg ezt a címet: localhost:8080
Web UI adminisztrátor felhasználó neve: weylyn1
```

So I would like to write a script that checks every 5 minutes whether qbittorrent-nox is running. If it's not running, it should start it with `service qbittorrent-nox start` (as root); if it is running, wait 5 more minutes and check again. I would like to use this workaround until a solution is found for the crashing.
How to test if a daemon is running? It depends. Some daemons have a file with the process ID, say /var/run/foo.pid. An example of that is /var/run/crond.pid:

```
$ cat /var/run/crond.pid
432
```

If the process is running, it has a directory in /proc:

```
$ ls /proc/$(cat /var/run/crond.pid)
```

So if the directory in /proc does not exist, we can do a restart. If qBittorrent has such a pid file, you can do this (note the quoted EOF, so the command substitution is written into the cron file instead of being expanded now, and the user field that /etc/cron.d entries require):

```
# cat <<'EOF' >/etc/cron.d/restart-qbittorrent-nox
*/5 * * * * root /bin/test -e /proc/$(cat /var/run/qbittorrent-nox.pid)/cmdline || service qbittorrent-nox start
EOF
```

If you don't have any file in /var/run, you have to use something like ps ax | grep qBittorrent to find the process. But the best solution would be to find out why the process crashes...
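The liveness test from the cron job can also be written as a standalone sketch. Here kill -0, which probes a PID without sending a signal, replaces the /proc check; the pidfile path and service command are the answer's assumptions:

```shell
#!/bin/sh
# The daemon counts as dead unless its pidfile names a live process.
pidfile=${pidfile:-/var/run/qbittorrent-nox.pid}
pid=$(cat "$pidfile" 2>/dev/null)
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    echo "qbittorrent-nox is running (pid $pid)"
else
    echo "not running; would run: service qbittorrent-nox start"
fi
```

Note that kill -0 run as a non-root user reports failure for another user's processes, so a root-owned daemon should be checked as root (as the cron job does).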
How to write a crontab script that will check a process's status and launch it if not running?
I have this script named fork.sh:

```sh
#!/bin/sh
forkbomb() { forkbomb | forkbomb & } ; forkbomb
```

If I call it through suexec, my whole system will consume 99% CPU. To prevent a normal bash fork bomb I used limits.conf and set nproc to 50; this works as expected. But if I call the above script through suexec over httpd, I see over 6000 tasks in top and sys CPU use above 97%, with multiple entries of user3 fork.sh at about 0.6% CPU each. systemd-cgtop shows system.slice at 100% CPU and system.slice/httpd.service at 75%. I restricted httpd with cgroups:

```
systemctl set-property --runtime httpd.service CPUShares=600 MemoryLimit=500M
```

I don't get why ulimits and cgroups will not handle this issue.
The limits files are utilized by PAM. The stock suexec provided by Apache still does not recognize or utilize PAM. Patches exist, and you can modify the source directly to invoke setrlimit (it looks pretty easy; see the setrlimit(2) man page). But as is, suexec will not recognize anything you do with limits. You can still set ulimits from within Apache, but I think this is undesirable, especially in the pre-fork model, because then you limit the load your HTTP server can handle. Further, I'm not sure whether limits will carry over to the suexec environment, because I'm not sure how suexec does its job.

The reason CPUShares won't help you here is that your fork bomb is a resource hog, but the resources aren't really in the userland CPU cycles. By the time CPU accounting becomes involved, it's too late: your system has no free memory or process slots left to run the program.

You might try prlimit, part of util-linux, so it should be standard with your installation. If you do this:

```
$ prlimit --nproc=1 bash -c 'bash -c id'
bash: fork: retry: No child processes
```

the little two-fork process failed. Unfortunately, I don't see a sound way to get prlimit involved in the execution chain. Apache has screwed themselves and all of us by providing a deprecated suexec module that isn't flexible enough to meet the basic demand, and is straitjacketed enough that any real solution requires blasting a hole through security.
System crash: suexec fork bomb despite ulimits
I am using several instances (profiles) of Icedove (Thunderbird), and when I need to close all of them, I use:

```
killall icedove
```

According to man killall, if no signal name is specified, SIGTERM is sent, and SIGTERM allows the process to perform nice termination, releasing resources and saving state if appropriate. When I closed Icedove this way after an add-on installation, the configuration changes that I had made to the add-on were lost. I had to repeat the steps and close Icedove properly (using Exit in the menu). I understand that this is only anecdotal evidence; I don't have enough observations to make any conclusive claims. But still, is there any possibility to make the termination request even "nicer" than killall <processname>, so that the termination is as clean as if the application had been closed using menu, then Exit?
SIGTERM allows a process to perform cleanup before it terminates, but whether or not the process actually does so, and what sort of cleanup it performs, depends on how the program was written and (to an extent) on the facilities provided by the language the program was written in. So when a program receives SIGTERM it's not obliged to save anything, but it's quite likely that any files it currently has open will get closed. There are "nicer" signals than SIGTERM, e.g. SIGUSR1, but a program will ignore such signals unless it's been written to specifically listen for them.
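The point that SIGTERM's effect is entirely up to the program can be made concrete with a tiny shell sketch: a child whose trap "saves state" before exiting. Without the trap line, the same kill would end it with nothing saved:

```shell
#!/bin/sh
# The child decides what SIGTERM means to it via its trap.
sh -c 'trap "echo saving state; exit 0" TERM; while :; do sleep 1; done' &
child=$!
sleep 1
kill -TERM "$child"
wait "$child"          # prints "saving state"
```

A GUI application's menu Exit typically runs more elaborate shutdown code than its SIGTERM handler, which is consistent with the add-on settings being lost only in the killall case.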
gracefully terminating processes with killall <processname>
I am using Linux. When I started gedit in gnome-terminal with the gedit command, it opened the graphical gedit text editor, and gedit's PPID was that of bash:

```
ashokkrishna@ashokkrishna-Lenovo-B560:~$ ps -eaf | grep gedit
ashokkr+  1682   820  3 04:09 pts/6  00:00:00 gedit
ashokkr+  1695  1568  0 04:09 pts/9  00:00:00 grep --color=auto gedit
```

Here 820 is the PID of bash:

```
ashokkr+   820 32505  0 03:32 pts/6  00:00:00 bash
```

But when I opened the same gedit by double-clicking the gedit icon:

```
ashokkrishna@ashokkrishna-Lenovo-B560:~$ ps -eaf | grep gedit
ashokkr+  1855  1982 14 04:16 ?      00:00:00 /usr/bin/gedit
```

I got PPID 1982, which is init:

```
 1982 ?        00:00:00 init
```

Now my question is: why is the parent process different in the two cases? What process actually initiates user processes?
What you're seeing should not surprise you. You've started gedit two different ways, via two different parents, so of course the PPID, the parent process ID, is different in the two cases.

The first is a child of Bash, because you started it from a Bash command line. The second one's initial parent will be your OS's GUI system, but because gedit is forked into the background, it gets orphaned, so init adopts it; this is the standard way of handling orphaned processes on a Unix/Linux system. The shell (Bash) simply isn't involved in the second case.

Bash is a child of Gnome Terminal, which will be started by some core component of the system. I see upstart as the parent on my Ubuntu 14.10 box, but that will vary on different Linux and Unix systems. When the terminal closes, so will Bash, as will any programs started by Bash that haven't been let go into the background somehow.

Ultimately, all processes are started by the kernel, usually via some wrapper around the execve(2) system call. But you aren't going to see the kernel as a parent process here; the kernel acts on behalf of some user process, so that process gets recorded as the parent. The reason init(8) is not PID 1 here is covered in another answer.
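The orphan-adoption step can be watched from a shell: start a child whose parent exits immediately, then look at the child's new PPID (on systems with a subreaper, such as some session managers, the adopter may not be PID 1, but it is never the dead parent):

```shell
#!/bin/sh
# The sh -c parent backgrounds a sleep and exits; the sleep is orphaned
# and re-parented. stdout of the sleep is redirected so the command
# substitution does not wait for it.
info=$(sh -c 'echo $$; sleep 5 >/dev/null 2>&1 & echo $!')
parent=$(echo "$info" | sed -n 1p)
child=$(echo "$info" | sed -n 2p)
sleep 1                                   # let the parent exit
echo "parent was $parent; child $child now has PPID $(ps -o ppid= -p "$child")"
kill "$child" 2>/dev/null
```

This mirrors the double-click case in the question: the launcher process is gone by the time ps runs, so gedit shows init as its parent.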
Why the PPID is different when opening from a terminal and when opening by double-clicking
You can list the process ID of each window with this command:

```
wmctrl -lp
```

Does there exist a command that shows the running command of each window (kind of like htop has a column for "Command")? If not, how could you combine commands to ultimately achieve this?
This will replace the PID in wmctrl -lp's output with the corresponding command, if one is found:

```
wmctrl -lp | awk '{ pid=$3; cmd="ps -o comm= " pid; while ((cmd | getline command) > 0) { sub(" " pid " ", " " command " ") }; close(cmd) } 1'
```

This obviously won't work for windows displaying remote processes; it will also give strange results for windows corresponding to sandboxed processes in some cases (e.g. Flatpak). The AWK script reads each line, extracts the PID, and runs ps -o comm= to determine the corresponding command; if one is found, it replaces the corresponding PID string with the command.
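The AWK part can be exercised without a window manager by feeding it a synthetic wmctrl-style line; here the current shell's PID stands in for a window's PID (the hostname and window title are made up):

```shell
# Field 3 is where wmctrl -lp puts the PID; the script swaps it for the
# command name that ps reports for that PID.
echo "0x00000001  0 $$ myhost demo-window" | awk '{ pid=$3; cmd="ps -o comm= " pid; while ((cmd | getline command) > 0) { sub(" " pid " ", " " command " ") }; close(cmd) } 1'
```

The output is the same line with the PID replaced by the shell's command name, which is exactly what happens per window in the real pipeline.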
List Running Commands of All Windows
I am tending to a program, "master", which manages a set of concurrently running sub-processes, "slaves". Sub-processes are launched and killed as needed, and many of them use start scripts. The output of pstree looks like this (excerpt; the master is implemented in Java, with two slaves launched via script):

```
systemd───java─┬─sh───slave
               ├─slave
               └─sh───slave
```

Previously, the start scripts redirected the slaves' output to log files. It was decided that the master should handle the slaves' output as well, so the master's implementation was extended by adding a buffered reader like this:

```java
process = Runtime.getRuntime().exec(cmd);
BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
while (null != (line = br.readLine())) {
    // handle slave output here
}
```

The system then developed serious issues with slaves that had been killed (sent SIGTERM) by the master but in fact were still running. I noticed this happened only with slaves that met two criteria:

- they made use of a start script
- they rarely wrote to standard output

Since the master had not killed the slave itself, but only its immediate parent (the shell interpreter), the slave was now owned by init; in my case, systemd seems to be the default reaper. pstree then looks like this:

```
systemd─┬─java───sh───slave
        └─slave
```

Functionally, I solved this problem by explicitly killing the slave's entire family. Yet I still wonder: why does systemd kill the orphaned child only if it writes to standard output (or error), and only if standard output was previously read by another process? The question is rather lengthy as it is; upon request, I can supply a minimal code example to reproduce the behaviour described.
That's likely not systemd doing it. Instead, the process is killed by a SIGPIPE when it tries to write to a pipe where the read side has been closed -- which fits the description "standard output was previously read by another process."
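The mechanism the answer describes is easy to reproduce in a shell. Below, 141 is 128 + 13, and 13 is SIGPIPE's number; bash's PIPESTATUS array is used to see the writer's exit status, since the pipeline's own status is that of the last command:

```shell
# "yes" writes forever; "head" exits after one line and closes the read
# side. The writer's next write raises SIGPIPE, which kills it.
bash -c 'yes | head -n 1 > /dev/null; echo "exit status of yes: ${PIPESTATUS[0]}"'
```

This prints "exit status of yes: 141". In the question's setup, the Java master held the read side of the slave's stdout pipe; once the master's reader went away, the slave's next write had the same fate as yes here.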
Why do my orphaned grandchildren die only if they produce output?
I'm trying to use the pidof command to check whether my script is already running, as I only want to run this executable if the script is not already running. However, it seems pidof does not return the PID of the script under the name displayed in the ps -ef output; instead the name is masked as either /usr/bin/python or /bin/su. Can someone shed some light on what is going on, and how I can run pidof 'script.py -v' to see whether the script is running or not?

```
[root@cloud proc]# pidof python /some/dir/script.py -v
pidof: invalid options on command line!
[root@cloud proc]# pidof "python /some/dir/script.py -v"
[root@cloud proc]# pidof "su - user -c python /some/dir/script.py -v"
[root@cloud proc]# ps -ef | grep script.py
root      5409 31739  0 13:07 pts/1 00:00:00 su - user -c python /some/dir/script.py -v
user      5414  5409 96 13:07 ?     01:00:40 python /some/dir/script.py -v
[root@cloud proc]# ls -l /proc/5409/exe
lrwxrwxrwx. 1 root root 0 Oct 13 14:04 /proc/5409/exe -> /bin/su
[root@cloud proc]# ls -l /proc/5414/exe
lrwxrwxrwx. 1 user user 0 Oct 13 14:04 /proc/5414/exe -> /usr/bin/python
[root@cloud proc]# pidof /bin/su
31715 6308 5409
[root@cloud proc]# pidof /usr/bin/python
5414
```
pidof doesn't offer a way to specify /path/to/script to match commands of the form interpretername /path/to/script; it always looks at the filename of the executable listed in the /proc/pid/stat file. It will work, however, if your script begins with a shebang #! and you invoke it as /path/to/script.

As an alternative, most GNU/Linux systems offer a pgrep command, which can match all or part of a command line. In your example, you can use:

```
pgrep -x -f 'python /some/dir/script.py -v'
```

This will match the exact command line. If you want to match a partial command line, you can do something like:

```
pgrep -x -f 'python /some/dir/script.py .*'
```

You can omit the -x to put an implicit .* at the beginning and end of the pattern, but this will also match your su - user -c python /some/dir/script.py -v process.

As you mentioned in the comments, a better way to ensure your command doesn't run multiple instances at the same time is to use file locking, such as fcntl.flock.
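The file-locking idea also works at the shell level, using flock(1) from util-linux; the lock path below is an arbitrary choice for this sketch:

```shell
#!/bin/sh
# Single-instance guard: take an exclusive, non-blocking lock on fd 9.
# A second copy of the script finds the lock held and backs off, with no
# process-name matching involved at all.
LOCK=${LOCK:-/tmp/script.py.lock}
exec 9> "$LOCK"
if flock -n 9; then
    echo "lock acquired; doing the real work"
else
    echo "already running; refusing to start a second instance" >&2
fi
```

Unlike pidof/pgrep checks, the lock is released automatically when the holder exits, however it dies, so there is no stale-pidfile problem.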
Process name 'masked' by /usr/bin/python and /bin/su
I need to run a program xyz; it finishes execution in a few seconds. It has some signal handling I need to test. From the shell or a bash script, how do I execute the program and, while it is executing, send it a signal like kill -14? Currently the command I am trying is:

```
/usr/local/xyz input > some_file 2>&1 & PIDOS=$! && kill -14 $PIDOS
```

It does not seem to trigger the signal handling.
That command looks OK. Though when I tried it, it appears the command was too fast: it's as if my test script didn't have time to install the signal handler before it got shot.

A test script:

```
$ echo '$SIG{USR1} = sub { print "GOT SIGNAL\n" }; sleep 100' > sigtest.pl
```

Shoot it immediately (the sleep is there so the next prompt isn't printed immediately):

```
$ perl sigtest.pl & kill -USR1 $! ; sleep 1
[1] 8825
[1]+  User defined signal 1   perl sigtest.pl
```

It didn't print anything, but died from the signal. Let's give it some time:

```
$ perl sigtest.pl & ( sleep 0.5 ; kill -USR1 $! )
[1] 8827
GOT SIGNAL
[1]+  Done                    perl sigtest.pl
```

Now the signal handler fired. (The signal interrupted the sleep, so the script exited anyway.)
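The same race can be shown without Perl, using a shell child whose trap plays the role of the Perl handler; the one-second delay before kill gives the trap time to be installed:

```shell
#!/bin/sh
# Child installs a USR1 trap and idles; parent delays, then signals it.
sh -c 'trap "echo GOT SIGNAL; exit 0" USR1; while :; do sleep 1; done' &
pid=$!
sleep 1
kill -USR1 "$pid"
wait "$pid"            # prints "GOT SIGNAL"
```

For the original problem, the same idea applies: insert a short delay (or a readiness check) between starting /usr/local/xyz and sending kill -14.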
How do I run a process and send it a signal while it's running?
I ran the top command on my Linux machine and I see that a vim process is taking about 99.5% CPU:

```
  PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
23320 gsachde 25  0 10720 3500 2860 R 99.5  0.2 30895:11 vim
```

How can I verify which script/program it is?
If you press c while in top, the COMMAND column will expand to show the full command used to start the process. You can also take the PID and run:

```
ps -ef | grep $PID
```

Or:

```
cat /proc/$PID/cmdline
```
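One detail worth knowing about the /proc route: on Linux, cmdline separates the arguments with NUL bytes, so piping it through tr makes it readable. Here the current shell's own PID is used as a stand-in for the vim PID from the question:

```shell
# Print this shell's full command line, with NUL separators turned into spaces.
tr '\0' ' ' < "/proc/$$/cmdline"
echo
```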
linux + top command
Assume I have a long-running process, long_running_proc, with a single TCP connection to host host.example.com. Is that process treated differently, by the OS or the shell, when it's run as a foreground process vs. in the background or behind screen? For instance:

```
~ long_running_proc --connect host.example.com
...
```

vs.

```
~ screen
~ long_running_proc --connect host.example.com
[ctrl-a] + d
~
```

vs.

```
~ long_running_proc --connect host.example.com &
[1] 67539
~
```

Are there different rules for process interruptions or context switches? Do they have a lower priority? Would I be more likely to get a TCP timeout with a screened/background process?
In general, by default the only difference is that the process would receive a SIGTTIN (or SIGTTOU) signal if it tried to read (or write) the tty while in the background. Other differences in priority or context-switch frequency arise only if your shell (or screen) willingly does something of that sort, such as changing the process's "nice" number, or binding it to one particular CPU that happens to be interrupted a lot. Normally shells don't do anything of this kind unless requested.

A higher probability of getting TCP timeouts might be related to whether your process gets stopped by one of the above signals (due to attempted tty access), in which case it wouldn't have any chance to receive, and therefore reply to, network traffic. If you think about it, daemon processes are the most "background" processes possible, and they certainly aren't second-class processes.

I can't be exact about screen's detach operation specifically, but its documentation says that detached processes continue running and that screen detaches itself from the process's tty, so the process goes on with basically no difference from normal foreground or background operation. You would have difficulties giving it commands, though, since your interactive terminal is detached from the process's virtual terminal. This might not be good for your process if at some point it expects input from its terminal.
Are processes within screen treated differently from foreground processes?
I have over 10 windows of a file manager (Thunar, on Xubuntu Core 18.04) open, but ps aux | grep thunar shows nothing (except the grep itself). Why? ps -e doesn't show anything either.

EDIT: I suspect one reason may be that the location (in the address bar) in those windows is an ejected external medium. Another may be that I didn't open the windows myself (they opened automatically when I plugged in the media), so the processes might not be mine. However, this still doesn't explain thunar not showing up. The problem persists after logout.
Thunar used to use a capital 'T' in its name, and the package for 18.04 contains both Thunar and thunar. Try changing your grep command to ignore case:

```
ps aux | grep -i thunar
```
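The difference in isolation: a case-insensitive pattern matches both capitalizations, where the original command matched only one:

```shell
# -i makes grep case-insensitive; -c counts matching lines.
printf 'Thunar\nthunar\n' | grep -c thunar    # prints 1
printf 'Thunar\nthunar\n' | grep -ic thunar   # prints 2
```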
File manager instances not showing among running processes
I am a little bit confused about the /proc directory. Each process frequently updates its state, memory info, progress, etc. there. My question is: does the /proc directory keep this information in memory, or does it write it to the hard drive? I believe it might be kept in memory, since updating it so frequently would cost I/O operations, and the information is of no further use after the computer restarts.
The /proc directory itself exists as an empty directory on the hard drive. Its contents, however, are added by the kernel without touching the disk. If you try to access /proc before it is mounted (say, booting your system with nothing but a shell, using init=/bin/sh), it will be empty. You can replicate /proc on any directory with mount -t proc proc /path/to/directory.

Just like ext4, fat32, etc., proc is a filesystem. (It is referred to as a pseudo-filesystem because it cannot actually be used for storing files; if you try to do so, even as root, it will not work.) There are "real" filesystems that, like proc, don't write to the disk, such as ramfs/tmpfs; these keep their files in system RAM instead. (If it isn't already there, I recommend adding the line tmpfs /tmp tmpfs rw 0 0 to your /etc/fstab so that temporary files written to /tmp don't actually get written to your disk.) There are a few other pseudo-filesystems, like sysfs on /sys and devtmpfs on /dev. (/dev is slightly different: it isn't necessarily maintained by the kernel, and devtmpfs isn't always mounted over /dev; sometimes device files are written directly to the disk.)
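You can check this for yourself on a Linux system: stat -f (GNU coreutils) reports the filesystem type behind a mount point, and for /proc it is proc rather than a disk filesystem:

```shell
# Filesystem type of /proc vs. the root filesystem.
stat -f -c %T /proc   # prints "proc"
stat -f -c %T /       # prints ext2/ext4, btrfs, xfs, ... depending on the disk
```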
Do the /proc folder and process details really exist on the hard drive? [duplicate]
Is there a common utility (Ubuntu, perhaps OSX) that can run a server (serve ./public), then run some tests (./run-chrome-tests.sh), and once the tests are finished, kill the serve ./public process? This can be done in bash, but I'd rather write configuration than code, if that's feasible.
There is, to my knowledge, no such utility, but it is easily implemented in a shell script. A short shell script that implements what you described:

```sh
#!/bin/sh
serve ./public &
serve_pid=$!
./run-chrome-tests.sh
kill "$serve_pid"
```

You may want to insert a sleep 3 call (or similar) after starting serve in the background, to allow it to initialize properly before running the testing script. $! will be the PID of the most recently started background job (the serve process). When the run-chrome-tests.sh script finishes, the script above explicitly terminates the serve process by signalling it with kill.
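A variant worth considering: registering the kill in an EXIT trap means the server dies even if the test script fails or the wrapper is interrupted. The stand-in functions below make the sketch runnable anywhere; in real use you would replace them with serve ./public and ./run-chrome-tests.sh from the question:

```shell
#!/bin/sh
server_cmd() { sleep 60; }          # stand-in for: serve ./public
test_cmd()   { echo "tests ran"; }  # stand-in for: ./run-chrome-tests.sh

server_cmd &
serve_pid=$!
trap 'kill "$serve_pid" 2>/dev/null' EXIT   # cleans up on exit, failure, or Ctrl-C
test_cmd
```

With the plain version, a failing test script that aborts the wrapper mid-way would leave the server running; the trap closes that gap.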
Utility that can be configured to run two commands, and kill both when one finishes
1,661,351,478,000
I've just accidentally opened rockyou.txt in Kali Linux on a fairly slow computer. It has now been sitting on the desktop loading the 30 million words for over an hour. It is not frozen, as I can still use the mouse, and the clock display is still changing; however, I cannot cancel, close or open anything else. Is there any way I can close it or kill it without having to restart? I was also wondering if there is any way of searching for a specific word within rockyou, like an online database, instead of loading it and Ctrl-F?
It should be possible to go to the terminal by typing Ctrl-Alt-F1, logging in and searching for the offender with top, then remembering its name or PID and killing it: by PID: kill -KILL pid by name: pkill -KILL -f name SIGKILL will make it go away if it's not hung "inside the kernel", i.e. stuck in a bad syscall which does not release the task back to you in userspace. Such situations occur when the program is doing large disk I/O. If that's not even possible, then only Alt-SysRq can help, or logging in remotely (if a remote service such as ssh was enabled). Many sites refer to trying Alt-SysRq-R, but it never worked well for me with X11 (nor the Ctrl-Alt-Backspace thing; perhaps it is disabled by default). Another option is to kill everything with Alt-SysRq-E/I, but it will kill everything, not just the offender. If all these ways are exhausted, then only a hard reset remains. It's also possible that the kernel will kill it automatically with the OOM killer mechanism (since it is trying to load so many words into memory).
Opened file with several million lines: how to close it?
1,661,351,478,000
I've created a very large and complex python program and I now know it has a serious bug that I'm having a very hard time pinning down. I'm using this code in a production environment so I need a stop-gap measure to implement until I find and correct my coding issue. I need to create a bash script that I can use to check the CPU usage of my python program and kill it if it's consistently below x%. Once killed it will automatically restart on its own. I'm using the following to get my PID and %CPU $ ps -eo pid -eo pcpu -eo command |grep python |grep pycode.py 2940 71.9 python pycode.py How can I check %cpu, which is 71.9 above, against x% cpu and then kill the PID if needed? Also, the python program does not go runaway nor does it die. It simply drops to below 5% cpu and stays there and the UI freezes. I'm new to bash so I really don't know where to start.
Here's a crude attempt: read -r pid cpu rest < <(ps -eo pid,pcpu,command | grep '[p]ycode.py') if (( ${cpu%.*} < 5 )) ; then kill -TERM $pid fi We use ${cpu%.*} to truncate it to an integer, since bash can't handle floats. The [p] in the grep pattern keeps grep from matching its own entry in the ps output. This only runs once; if you want to keep it going, put it on a cron job, or put it in a loop with sleep 5 or whatever.
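A variant of the same check using pgrep instead of the double grep; this is a sketch where a background sleep stands in for the Python program, and the 5% threshold is the one from the question:

```shell
# Sketch: find the PID, read its CPU% with ps, and kill it if below threshold.
# "sleep 60" stands in for "python pycode.py".
sleep 60 & target=$!
cpu=$(ps -o pcpu= -p "$target" | tr -d ' ')
cpu_int=${cpu%.*}                 # truncate the float; bash can't compare floats
if [ "${cpu_int:-0}" -lt 5 ]; then
    echo "CPU ${cpu}% is below threshold, killing $target"
    kill -TERM "$target"
fi
```

For the real program you would obtain the PID with something like pgrep -f pycode.py instead of $!.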
Need script to kill python process with low CPU usage
1,661,351,478,000
Is there a common way or existing utility to do the following? kill a process Give it a few seconds to shut down gracefully kill -9 it if it hasn't stopped
Usually I try to keep things as simple as: kill $pid; sleep 5; kill -9 $pid Or you can search for a process by its name if you like: pkill $pattern; sleep 5; pkill -9 $pattern This is handy when you are working in a terminal, but for scripting you may prefer a more sophisticated solution from another answer.
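If you do want to script it, the same idea fits in a small function (soft_kill is my name, not a standard utility): send TERM, poll for a while, and only fall back to KILL if the process survived:

```shell
# Sketch: graceful kill with a SIGKILL fallback
soft_kill() {
    pid=$1; tries=${2:-5}
    kill "$pid" 2>/dev/null || return 0         # already gone
    while [ "$tries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 0  # it exited gracefully
        sleep 1
        tries=$((tries - 1))
    done
    kill -9 "$pid" 2>/dev/null || true          # last resort
}

sleep 100 & victim=$!
soft_kill "$victim" 3
echo "done"
```

kill -0 sends no signal; it only tests whether the process still exists, which is what lets the loop stop early once the TERM has taken effect.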
Idiomatic way to kill -9 only if "graceful" way doesn't work?
1,661,351,478,000
I had some system slowdown event1 after mistakenly using a command so I thought I could use a desktop "widget" to visually show quality of service or at least show when QoS was degraded and get some timely feedback. We have this natural ability to perceive degradation in the playback of an image sequence so I wanted to leverage this. Therefore I selected an mp4 video sequence2 and made a .gif file out of a segment of it with ffmpeg: ffmpeg -i snf.mp4 -ss 00:01:24.0 -t 00:00:04.03 -s qcif -qscale:v 10 -an output.gif So this will be a stamp size qcif (176x144) gif image. In this example I extracted some 4.03 seconds starting at 1:24 into the sequence. Makes for a 1,3MiB file. Which I then animate: animate -borderwidth 0 output.gif & Then I "float" that tiny window in my tiling window manager, and keep that handy. The sequence loops and if you click on the image you get an imagemagick "display"-like menu where you have that (beautiful)auto-reverse option... so the clip plays for its 4 seconds, then plays backwards and back and forth! I thought maybe I could start this with ionice -c 2 -n 0 (but there can't be i/o here really) or renice it with renice -n -10 but i can't come to terms with how to "expose" this to "load" so that it would be the first thing impacted if there is a slowdown. From what I understand, if the execution is too nice, it should get impacted all the time but that wouldn't be related to a system slowdown, just to being "bumped". If it's not nice at all, then it wouldn't have to face any slowdowns unless in the most dire of states and that would defeat the purpose. How can I set my command to be impacted in priority(as opposed to having or not having execution priority) to other programs on my desktop so as to serve as a QoS widget? Or is that just misguided and since load balancing is all about heuristics I won't be able to use something like that? 
ps aux spec output for animate: 14454 0.3 2.9 228232 118768 pts/5 Sl 05:06 2:36 animate -borderwidth 0 test2.gif 1. I issued a wrong command using convert(ImageMagick) and it eventually exhausted system resources and self-terminated - I thought I was doing something legit so I let it go. During that time, I saw my system slowing down as the load increased; windows not refreshing, htop seemingly freezing, intense disk usage, and jerkiness with the mouse pointer. As the system was coming to a crawl, I noticed that looking at the movement of the pointer gave me a good "feel" of the slowdown, so to speak; more so, I found, than looking at the load indicator reach 7-8 in my status bar (i3). Generally speaking, for real time monitoring and information I use htop and the i3 status bar. But I'm also interested in different kinds of system feedback. 2. To help with my perceptual abilities, I selected a dancing clip. Some time ago I saw on youtube this famous dance sequence from Saturday Night Fever (Bee Gees - "You should be dancing") with M. John Travolta taking on the whole dancefloor. His landmark move is rhythmic and the lights beat in a discernible pattern.
I hope you're aware there are already a lot of system monitoring widgets. But anyway: NOTE: Depending on your setup, there may be a dedicated hardware path for video. So it's possible this doesn't really require any CPU time. But while ffmpeg may use that, animate probably doesn't. NOTE: For your animation to slow down noticeably, it may have to be using a non-trivial amount of CPU time. That's going to make the CPU use more electricity and run hotter. If you have dynamic fan speeds, it'll make your machine louder. First, positive numbers to nice are less priority, so you'd want to use 19 (the lowest priority). Second, there is actually a better option: you can change the scheduling policy, at least on Linux. There is a schedtool program that can supposedly do this (or you can use sched_setscheduler in C). If you set your policy to SCHED_IDLE, that is an even lower priority than niceness 19. Note also that if your other processes are out of memory (i.e., your system is thrashing to death) then your animation may not notice first, as it's not requesting memory. OTOH, a swap activity monitor will pick that up very quickly. Finally, there are several ways to prevent runaway programs from making your system unusable: set a ulimit to prevent excessive resource usage, or use something like ulatencyd to automatically throttle them using cgroups.
How can I make my command more susceptible to system slowdowns so as to use it as a visual QoS widget on my desktop?
1,681,866,405,000
I am using Debian 9.13. Through ps -aux | grep NaughtyProcessName I can find information about a given process that interests me in the format: user.name [ID] [CPU USAGE] [%MEM] VSZ RSS TTY STAT START TIME COMMAND where COMMAND shows something like: path/to/interpreter ./file_name.cmd So I suppose some user was inside a mysterious directory which had file_name.cmd inside it and spawned a process by doing ./file_name.cmd. The process uses the interpreter found in path/to/interpreter. I want to know in which directory this file is. The only thing I know I could try is cd / && find -iname file_name.cmd But that takes time and could find duplicates. Is there anything better and more straight to the point?
Given a process id <pid>, then /proc/<pid>/cwd is a symlink to the working directory for that process. That is, if I run python ./example.py from ~/tmp/python, in ps I will see: $ ps -f -p 118054 UID PID PPID C STIME TTY TIME CMD lars 118054 6793 0 09:16 pts/1 00:00:00 python ./example.py And in /proc/118054/cwd, I see: $ ls -l /proc/118054/cwd lrwxrwxrwx. 1 lars lars 0 Aug 31 09:16 /proc/118054/cwd -> /home/lars/tmp/python So you can use that information to infer that ./example.py refers to /home/lars/tmp/python/example.py. Note, however, that you cannot trust the information you see in the output of ps. Consider this simple C program: #include <string.h> #include <stdlib.h> #include <stdio.h> #include <unistd.h> int main(int argc, char *argv[]) { pid_t pid = getpid(); printf("pid %d\n", pid); memset(argv[0], ' ', strlen(argv[0])); strcpy(argv[0], "ls"); sleep(600); return 0; } If we run this: $ ./example pid 119217 And then look at ps: $ ps -f -p 119217 UID PID PPID C STIME TTY TIME CMD lars 119217 6793 0 09:25 pts/1 00:00:00 ls It looks like we're running something completely innocuous.
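The cwd symlink is easy to exercise from a shell; in this sketch a subshell changes into a temporary directory and stands in for the mystery process:

```shell
# Resolve a process's working directory via /proc/<pid>/cwd
tmp=$(mktemp -d)
( cd "$tmp" && exec sleep 3 ) &   # a "mystery" process running inside $tmp
helper=$!
sleep 0.2                         # give the subshell a moment to chdir and exec
cwd=$(readlink "/proc/$helper/cwd")
echo "$cwd"
kill "$helper" 2>/dev/null || true
```

readlink on the cwd symlink returns the directory the process is actually in, regardless of what its command line claims.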
Find filepath that spawned process
1,681,866,405,000
I have a laptop with minimal resources and this process, "gnome-software", takes up huge space in RAM. I have to kill it every time. Is there permanent way to stop this process?
gnome-software is the GNOME frontend to PackageKit, a GUI utility to install and update packages. If it bothers you, you can uninstall it via apt-get remove gnome-software and install/update software via the CLI using apt-get.
How to stop a process permanently for every session?
1,681,866,405,000
I'm trying to know if some GUI process is idle or minimized in Linux, using Net-SNMP. I've been doing research and as far as I know, SNMP seems to be designed for monitoring services, not processes run by regular users. I've found just one MIB object, hrSWRunStatus (RFC 2790), which has only four running statuses: running(1), runnable(2), notRunnable(3) and invalid(4), but testing by maximizing and minimizing some GUI applications doesn't display any changes in their respective statuses; in fact, every process listed with snmpwalk has runnable(2) status, except one: snmpd, which is listed as running(1). # snmpwalk -v 2c -c public localhost .1.3.6.1.2.1.25.4.2.1.7 | grep "running(1)" HOST-RESOURCES-MIB::hrSWRunStatus.920 = INTEGER: running(1) # snmpwalk -v 2c -c public localhost .1.3.6.1.2.1.25.4.2.1.2 | grep 920 HOST-RESOURCES-MIB::hrSWRunName.920 = STRING: "snmpd" Even using ps I don't see a change in the status of a process I'm using at the moment (except for htop). If htop is running in a terminal console, like konsole, and I'm writing a text with kate, neither of those processes has the status "R" (running or runnable), just "S" (interruptible sleep), which I found weird, but it is supposed to be that way... https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk112953 So, how can I know, in Linux, if some process is idle or minimized...?
In the comments you said you want to develop a time tracking app, for tracking application usage. I guess you might do it by tracking which window is the active one at any given time. To do that, you would need to get access to the user's X11 session, and then repeatedly query its X11 property named _NET_ACTIVE_WINDOW. This code example might be helpful to you: https://github.com/UltimateHackingKeyboard/current-window-linux/blob/master/get-current-window.c If the system uses Wayland instead of classic X11, unfortunately Wayland might require its own solution; I simply don't know enough about that one.
Identify idle or minimized process
1,681,866,405,000
Eg. Processes being run by various users are as below. root 5 xuser 3 yuser 1 Then the script should give the output as: root ..... xuser ... yuser .
You can use bash printf and tr to do this histogram: while read name num; do dots=$(printf "%*s" $num " " | tr " " .) printf "%s\t%s\n" "$name" "$dots" done <<END root 5 xuser 3 yuser 1 END root ..... xuser ... yuser .
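To wire this up to real data as the question asks, the per-user counts can come straight from ps; a sketch (output will of course vary by system):

```shell
# Count processes per user with ps, then render the dot histogram
out=$(ps -eo user= | sort | uniq -c | awk '{ print $2, $1 }' |
    while read -r name num; do
        dots=$(printf "%*s" "$num" " " | tr " " .)
        printf "%s\t%s\n" "$name" "$dots"
    done)
printf '%s\n' "$out"
```

uniq -c produces "count user" pairs, which awk swaps into the "user count" order the loop expects.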
Shell Script to fetch linux processes and show the process count for individual user as "." [closed]
1,681,866,405,000
I have a simple bash script that checks if a program is running and actions accordingly. #!/bin/bash check_running=$(pgrep -x redshift) if [[ -n "$check_running" ]]; then echo "1" else echo "0" fi If I execute the script normally (./script) then it will always return 1. But if I use "bash -x script" then it returns the correct outcome ❯ bash -x redshift ++ pgrep -x redshift + check_running= + [[ -n '' ]] + echo 0 0 I have a similar script checking if openvpn is running and it returns the correct value via regular execution. Here it is in full: ~/.config/polybar/scripts ❯ pgrep -x redshift ~/.config/polybar/scripts ❯ ./redshift 1 ~/.config/polybar/scripts ❯ bash -x redshift ++ pgrep -x redshift + check_running= + [[ -n '' ]] + echo 0 0 What am I doing wrong?
When you run ./redshift, pgrep -x redshift will match that script's own process (the script is itself named redshift, so its process name matches exactly), and check_running will contain a PID. When you run it as bash -x redshift instead, the process name is bash rather than redshift, so pgrep finds nothing and you get the result you expected. You can put a set -x in the script, or use #! /bin/bash -x as the shebang, to verify this.
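The self-match is easy to reproduce with a throwaway script (the name myfakeproc and the temporary path are mine):

```shell
# Reproduce the self-match: a script named like its pgrep pattern finds itself
tmp=$(mktemp -d)
cat > "$tmp/myfakeproc" <<'EOF'
#!/bin/bash
# $$ is this script's own PID; pgrep -x matches it by process name
if pgrep -x myfakeproc | grep -qx "$$"; then
    echo "self-match"
else
    echo "no match"
fi
EOF
chmod +x "$tmp/myfakeproc"
result=$("$tmp/myfakeproc")
echo "$result"
rm -rf "$tmp"
```

Renaming the script so it no longer collides with the program's name (or filtering out $$ from the pgrep output) avoids the problem.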
Bash script if statement returning incorrect result while bash -x works
1,681,866,405,000
Is there a program in Linux to control which processes are allowed to run with some kind of control list? So that, when you will try to run a process that is not in the list you will be notified about it and asked if to add it to the list of allowed processes.
No. The Unix security model is based on users and resources. It is designed to control which users have access to which resources. Resources are mostly exposed as files, and access control is done through file permissions. Processes are merely agents of the user. There is no restriction on what code a user may run. There are restrictions on which files a user may run, but this is generally not a practical restriction¹ as users can put new code in a new file and execute that. You could set up a wrapper script around an executable to prompt the user “are you sure you want to run this program?”. But this would be pretty annoying and pointless: users could run the program directly (or install their own copy). There may be a way to solve the actual problem you have, but it wouldn't be “allowing a process to run”. ¹ It's a restriction only in two cases: the permissions on executables that elevate privileges (setuid/setgid) restrict which users can elevate privileges, and accounts that cannot create an executable file at all (restricted accounts) cannot execute arbitrary code.
List of processes allowed to run
1,681,866,405,000
I'm running Ubuntu 16.04 I have this process X running from multiple tty. I run it from other pseudo terminals using the screen command and it also runs from crontab. This program is launched from a python script which is launched from a bash script. Sometimes, the python script gets an exception which I'm not able to catch and it leaves this process X running. I would like to kill this process X at the end of the bash script but to avoid killing other X processes which are running from other terminals. I thought of doing pgrep to process X which is not associated with other terminals using the -t argument but I couldn't understand the syntax from the documentation.
Start a new process group for the Python code (using the setsid utility) and whatever other processes (including the application proper) it starts, so you can kill the entire process group if needed. You can use the following construct to do so: exec 3>&1 pgid=$(exec setsid bash -c 'echo $$; exec >&3-; exec COMMAND...') where COMMAND... is the command and its parameters you'd normally start. Note that it is within single quotes, and that the command to be run must be evaluatable as a string (as opposed to a normal shell expression). The first redirection, 3>&1, copies the standard output descriptor to descriptor 3. The redirection >&3- moves descriptor 3 to standard output. The $(...) executes the ..., and evaluates to the data it wrote to standard output. Above, we read the standard output into shell variable pgid. The subshell (...) is replaced with the setsid utility, which executes its argument in a new session (process group). Here, it executes bash, which prints out the current process PID ($$), moves the original standard output from descriptor 3 back, and replaces itself/executes the desired COMMAND.... The shell will execute that line for as long as the COMMAND... runs, and will only progress to the next line after COMMAND... itself exits. If the COMMAND... leaves spurious processes running, and you want to kill them, all you need to do is run kill -SIGNAL -$pgid The issue is which signal to send. KILL will immediately kill the leftover processes in the process group, so it is the simplest option. However, if the processes behave nicely, you should be able to ask them to exit by sending them a TERM signal instead. Of course, sometimes the processes may be left in a wonky state, so that they don't react to TERM and do need to be KILLed. To solve this, you can use a small loop, and ps to see if there are any processes left: retries=50 while ((1)); do # Processes left? left=$(ps -o pid= -s $pgid) [ -n "$left" ] || break # Ask them to terminate. 
kill -TERM $left # Wait 0.1 seconds. sleep .1 # Decrement the retry counter. ((--retries > 0)) || break done # If there are any processes left in the group, # send them a KILL signal. left=$(ps -o pid= -s $pgid) [ -n "$left" ] && kill -KILL $left Note that the 50 retries (0.1 seconds each) mean that this waits only up to 5 seconds before sending a kill signal. That may not be a suitable value, depending on the application and the kind of hardware it is running. For example, if the machine is a laptop, and the application saves some history or logs into a number of files, and the drive happens to be sleeping at the exit point, I'd up the delay to perhaps 15 to 30 seconds.
Custom "garbage collector" to manually close a program
1,681,866,405,000
I have two related doubts about Bash. (Q1) Consider tail -f SomeFile | wc, a fictitious command-line, where tail is used to simulate a command (C1) which runs for a long time, with some output from time to time, and wc is used to simulate a command (C2) which processes that output when C1 finishes. After waiting for a long time (longer than usual) I want to see what output has been generated till now, so I want C1 to terminate. But pressing Ctrl-C will kill this whole pipeline. How can I kill only C1 (or even a component of C1, if that is itself a compound command)? If C1 had been looping over many files and grepping some text, but one file was from some hung nfs server, then I want to kill only that grep process. (for F in A B C D E ; do grep sometext $F ; done) | wc Here Ctrl-C will kill the whole command-line, but I want to kill only the currently running (or hung) process and continue with the remaining files. One solution I have is to open a new connection, get the ps output, and "kill" it. I was wondering if there was a solution from Bash itself, such that some strange key-combination kills only the current process? (Q2) While trying to make examples for this question, I made this command-line, where if I press Ctrl-C, I get an extra line of output, like this: # echo `ping 127.0.0.1` | wc ^C # When backticks (``) are not used, the extra line is not there: # tail -f SomeFile | wc ^C # Am I correct in thinking that since backticks (``) are handled by bash itself and, when the sub-process is killed, it is still considered as "empty output", so that is printed as the extra line?
In bash you can run: cmd1 | cmd2 | (trap '' INT; cmd3) And a Control-C will only kill cmd1 and cmd2, but not cmd3. Example: $ while sleep .1; do echo -n 1; done | (trap '' INT; tr 1 2) ^C222222222 $ while sleep .1; do echo -n 1; done | tr 1 2 ^C This takes advantage of the fact that a signal disposition of "ignore" is inherited by subprocesses -- the trap '' INT will also affect the tr command. But of course, some commands install their own SIGINT handlers, which will break this assumption. Unfortunately, this doesn't work in ksh93 because of a stupid bug. A workaround there could be: ksh93$ while sleep .1; do echo -n 1; done | sh -c 'trap "" INT; exec tr 1 2' ^C222222222ksh93$
How to kill only current process and continue with shell pipe-line?
1,681,866,405,000
Why is parole (just a media player) still listed in the jobs command output even after killing it manually, and why is it not listed in the ps command output? Does this mean the process is still running in the background? (PS: when I issued the kill command the media player [parole] closed.) If it is running, why is it not listed in the ps output? If it is not running, what is the meaning of the jobs output?
A terminated job will appear in the jobs output one last time, after the next command. See below > sleep 30 & [1] 134042 > ps PID TTY TIME CMD 134009 pts/4 00:00:00 bash 134042 pts/4 00:00:00 sleep 134043 pts/4 00:00:00 ps > kill 134042 > date Mon Aug 3 22:11:58 CEST 2020 [1]+ Terminated sleep 30 > jobs > According to man bash, JOB CONTROL The shell learns immediately whenever a job changes state. Normally, bash waits until it is about to print a prompt before reporting changes in a job's status so as to not interrupt any other output. So you can even use an empty command (i.e. press return) to see the Terminated (or Done) line.
Why is a process being shown while using "jobs" command after killing the process manually?
1,681,866,405,000
I'm trying to determine whether a command I'm running is within an SSH session. Usually this works fine by checking for $SSH_CONNECTION or walking the process tree and looking for sshd. However, if I start a screen session locally and then re-attach it through SSH, neither of those works. Is there some way from within the reattached screen session to determine which shell the session is currently attached to? The process tree just looks like shell(X) --> screen(Y) --> systemd(1), which makes sense, since the screen session probably gets reparented when I exit the local terminal. screen -ls does not say anything more than (Attached), with only the PID Y, no helpful PID of where it is currently attached. The process tree of shell(A) where it is attached includes a single child screen(B), but I cannot find a way to link the PIDs Y and B. I even tried to find the other end of the unix socket being used by screen but it comes up empty. (even checked as root). Is this just something that isn't possible?
After a lot of experimentation, here's what I ended up with: Find the screen the shell is running under. Keep walking the pstree until a screen process is found: screen_pid=$(pstree -psUA $$ | egrep -o 'screen\([0-9]+\)' | tail -1 | egrep -o '[0-9]+') Look at all opened files for that process. Find the only /dev/pts/* file in that list: screen_pts=$(lsof -p $screen_pid | grep /dev/pts | awk '{print $NF}') Find the screen process controlling that pseudo-terminal: ps -o pid=,tty= -C screen | grep ${screen_pts/\/dev\/} | awk '{print $1}' From there the parent process will be the shell/ssh/whatever started the screen which is now attached to the shell. There are definitely some hacky assumptions made here that "work on my machine(tm)", but that's the general idea. If reliability is required, using stat with st_rdev will eliminate the hacky /dev/pts/5 -> pts/5 replacement. And something similar could be used to filter the list of open files where major(st_rdev) == some value that represents pseudo terminals.
Determine the shell from which screen reattach was called
1,681,866,405,000
I wrote a shell.nix file to build the development environment for one of my projects. I'm using a shellHook to ensure a postgresql server is started when you drop into the nix-shell. The shellHook is essentially: export PGDATA=$PWD/nix/pgdata pg_ctl start --silent --log $PWD/log/pg.log Despite the fact that pg_ctl starts a server in the background, if I type Ctrl-C in the shell, the server shuts down. If I set up the same scenario outside of nix-shell, this does not happen. I'm new to strace, but it looks to me like the postgresql process is receiving SIGINT when I type Ctrl-C in my terminal: $ strace -p $postgres_pid strace: Process 20546 attached select(6, [3 4 5], NULL, NULL, {tv_sec=51, tv_usec=289149}) = ? ERESTARTNOHAND (To be restarted if no handler) --- SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} --- rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS RTMIN RT_1], NULL, 8) = 0 write(2, "LOG: received fast shutdown req"..., 37) = 37 kill(20550, SIGTERM) = 0 ... The postgresql process is attached to the same controlling terminal (pts/12) as my nix-shell process (though this is also true when I run it outside of nix-shell): $ ps -p ${postgres_pid},${nixshell_pid} -o pid,ppid,wchan,tty,cmd PID PPID WCHAN TT CMD 14608 18292 core_s pts/12 bash --rcfile /tmp/nix-shell-14608-0/rc 16355 1 core_s pts/12 /nix/store/xxxxxx-postgresql-9.6.8/bin/postgres What's a good next step in debugging this? Should I read up on process groups? Update: Trying a tip from another question, I found that this fixes the problem: set -m pg_ctl start --silent --log $PWD/log/pg.log The weird thing is, according to $-, the m option was already set. Running echo $- produces imBH both before and after the set -m. I noticed that in my interactive shells (whether nix-shell or not), $- is imBHs. The s is not present in the shellHook context, and I can't find an explanation of its meaning in the docs for Bash's set builtin. This may not be related though...
It seems the problem was that the postgresql server was running as part of the same process group as the shell that launched it via pg_ctl. Typing Ctrl-C propagated a SIGINT to all processes in the group. One way to fix this is to launch postgresql in its own session using setsid. setsid pg_ctl start --silent --log $PWD/log/pg.log That said, I still don't know why this only happens in the context of shellHook.
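The effect of setsid on the process group is visible with ps; in this sketch the setsid-wrapped child ends up in a group of its own, which is why a terminal-generated SIGINT aimed at the shell's group can no longer reach it:

```shell
# Compare process groups: a plain background child vs. a setsid-wrapped child
sleep 3 & plain=$!
setsid sleep 3 & detached=$!
sleep 0.2    # let both start
pgid_plain=$(ps -o pgid= -p "$plain" | tr -d ' ')
pgid_detached=$(ps -o pgid= -p "$detached" | tr -d ' ')
echo "plain:    pid=$plain pgid=$pgid_plain"
echo "detached: pid=$detached pgid=$pgid_detached"
kill "$plain" "$detached" 2>/dev/null || true
```

The plain child shares the launching shell's process group, while the setsid child gets a fresh session and group of its own.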
Background process (postgresql) receiving SIGINT from Ctrl-C in shell
1,681,866,405,000
First of all hello, and thank you for taking the time to read my question. Update: My desired outcome with this question is to know the best way to handle a browser process using up all the memory, via automation: to reclaim the memory by ending the process, or some other way if there is one. The process in question is a browser; I do a lot of research and have a lot of tabs open. When I notice the lag starting to happen I have a few seconds to end the process to gain back the memory, or the system will freeze. In the spirit of not re-writing something that has already been made, I wanted to ask before I write a basic script to handle this. It would also be very interesting to know the best practice for handling this. Please let me know should you require further information from me to be able to answer this question. Thank you in advance.
I'm not sure if this is the best practice, but I got away with just creating a one-liner that checks if memory usage is greater than 80% and then ends the process. [ $(free -m| grep Mem | awk '{ print int($3/$2*100) }') -gt "80" ] && pkill application || echo "Not Over 80%" Please note that this one-liner is matched up from other bits of code and I am still testing how it works, but so far it has worked well for me. I make no claims on how it might work for you, but it might be a good starting point. If someone has a better option I'm still eager to learn, so please do share your knowledge.
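A slightly more robust sketch of the same check, computing the percentage in a single awk pass over free (the 80% threshold and the pkill target remain placeholders):

```shell
# Compute the used-memory percentage and compare it against a threshold
threshold=80
used_pct=$(free -m | awk '/^Mem:/ { printf "%d", $3/$2*100 }')
if [ "$used_pct" -gt "$threshold" ]; then
    echo "over: ${used_pct}% used"    # here you would pkill the browser
else
    echo "under: ${used_pct}% used"
fi
```

Run from cron or a loop, this gives the same behaviour as the one-liner without the grep step.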
How can i automatically handle hogging process before system freeze?
1,681,866,405,000
I am trying to build a terminal based GUI for a tool. The following code invokes something like this while true do CHOICE=$(dialog --keep-window --clear --no-shadow \ --backtitle "$BACKTITLE" \ --title "$TITLE" \ --menu "$MENU" \ $HEIGHT $WIDTH $CHOICE_HEIGHT \ "${OPTIONS[@]}" \ 2>&1 >/dev/tty) clear case $CHOICE in #*) exec vim "$(echo $CHOICE | cut -d ':' -f 1)" ; ;; *) filename="$(echo $CHOICE | cut -d ':' -f 1)" #mkfifo "$TOMATO_DIR/cf" if [ ! -z $filename ] ; then dialog --editbox $filename 60 80 #cp "$TOMATO_DIR/cf" $filename #rm -f ${INPUT} else clear exit 0 fi clear ;; esac done And on pressing ENTER an editbox like the following opens: I tried opening the file in vim, but on saving the file the tool exits. I want to know how to open the file and return to the tool on saving or exiting from vim.
exec is a shell builtin; as per the bash man page (be patient, it is far away): exec [-cl] [-a name] [command [arguments]] If command is specified, it replaces the shell. No new process is created. Consider two scripts: exec ls pwd and ls pwd If you execute the first script, the exec ls command will replace the shell (discarding the remaining input), so the pwd command will never get executed.
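This is easy to demonstrate, and it also points at the fix for the question: drop the exec before vim so the script continues after the editor exits. A sketch of the difference:

```shell
# With exec, the shell is replaced by the first echo, so the second never runs
with_exec=$(bash -c 'exec echo first; echo second')
without_exec=$(bash -c 'echo first; echo second')
echo "with exec:    $with_exec"
echo "without exec: $without_exec"
```

In the menu loop above, changing `exec vim "$file"` to plain `vim "$file"` lets the while loop resume once vim exits.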
Edit file with vim using Dialog
1,681,866,405,000
I'm looking for a process manager which can be controlled from the CLI (add, start, stop, delete), so I can control it programmatically. I've tried using https://github.com/circus-tent/circus, but the problem is that when I add a process from the CLI, the process disappears after a server restart. I opened an issue there: https://github.com/circus-tent/circus/issues/937. I didn't try Supervisord yet, but it seems to have the same issue: https://github.com/mnaberez/supervisor_twiddler/issues/4. Is there any process manager which can add a daemon process from the CLI, where the changes persist after restart, without touching the configuration file? Thanks. I'm on CentOS 7, and I want to daemonize a PHP CLI script for each registered user. Sorry, I'm not sure how I can explain this better. I have a PHP CLI script which has an infinite loop. The script runs to listen for new incoming messages. The script should be started for each newly registered user, e.g. php listen.php --user_id=111, and stopped on deleting the user.
The package I recommend for this is called daemontools by Dan Bernstein. This is a collection of tools to provide system-wide service supervision and to manage services. It not only cares about starting and stopping services, but also supervises the service daemons while they are running. Amongst other things, it provides a reliable interface to send signals to service daemons without the need for pid-files, and a log facility with automatic log file rotation and disk space limits. It satisfies all of your requirements. It's ultra-reliable; once you set it up and understand how to use it, it requires very little maintenance. If there's a problem in your system, it won't be daemontools. All control is via the command line. The daemons will be restarted on system restart. The daemons can be started, stopped, and suspended from the CLI. Plus, it handles logging for each daemon too. It manages fast restarts (when a program dies quickly). This package and underlying design are rock solid. The source code hasn't changed in years, but don't let that fool you. It hasn't needed to change because it's correct. I've personally used this package to reliably control hundreds of daemon processes on one machine at a time. Configuration of a new client is easy: just place a control file in the specified directory and it will be automatically started and restarted forever, unless you intervene. Once you know what the file should look like, you can make a template or a way to parameterize the control file creation. I think your best bet is to get the RPM source package from kteru on github and build your own RPM from it. It's easy to build, but the RPM will make it easier to manage and replicate your system. 
The homepage and documentation are located at http://cr.yp.to/daemontools.html The CentOS 4-7 RPM source package is available on github: https://github.com/kteru/daemontools-rpm There's also a package called runit that I think is a branch of daemontools without some of the licensing and distribution restrictions of daemontools and a more flexible directory layout policy. It is in the Debian repositories, I don't know about CentOS.
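As a sketch of what a per-user service could look like (the directory layout is standard daemontools; the setuidgid call, the PHP script path, the service account name appuser, and user id 111 are assumptions based on the question):

```shell
# Create a daemontools-style service directory for one user's listener.
# Under daemontools this would live under /service (or be symlinked there);
# /tmp is used here only so the sketch can run without privileges.
svcdir=/tmp/service/listen-111
mkdir -p "$svcdir"

cat > "$svcdir/run" <<'EOF'
#!/bin/sh
# daemontools' supervise runs this script and restarts it whenever it exits
exec 2>&1
exec setuidgid appuser php /path/to/listen.php --user_id=111
EOF
chmod +x "$svcdir/run"

ls -l "$svcdir/run"
```

Linking such a directory into /service makes svscan pick it up within seconds; svc -d stops it, and removing the symlink plus svc -dx deletes it. Because each service is just a directory on disk, it survives reboots without any central configuration file, which is exactly what the question asks for.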
Process management - add daemon process from CLI
1,377,261,966,000
I'm wondering about the way Linux manages shared libraries (actually I'm talking about Maemo Fremantle, a Debian-based distro released in 2009 running on 256 MB RAM). Let's assume we have two executables linking to libQtCore.so.4 and using its symbols (using its classes and functions). For simplicity's sake let's call them a and b. We assume that both executables link to the same libraries. First we launch a. The library has to be loaded. Is it loaded in whole, or is only the part that is required loaded into memory (as we don't use each class, only the code regarding the used classes)? Then we launch b. We assume that a is still running. b links to libQtCore.so.4 too and uses some of the classes that a uses, but also some that aren't used by a. Will the library be loaded twice (separately for a and separately for b)? Or will they use the same object already in RAM? If b uses no new symbols and a is already running, will the RAM used by shared libraries increase? (Or will the difference be insignificant?)
NOTE: I'm going to assume that your machine has a memory mapping unit (MMU). There is a Linux version (µClinux) that doesn't require an MMU, and this answer doesn't apply there. What is an MMU? It's hardware—part of the processor and/or memory controller. Understanding shared library linking doesn't require you to understand exactly how an MMU works, just that an MMU allows there to be a difference between logical memory addresses (the ones used by programs) and physical memory addresses (the ones actually present on the memory bus). Memory is broken down into pages, typically 4K in size on Linux. With 4k pages, logical addresses 0–4095 are page 0, logical addresses 4096–8191 are page 1, etc. The MMU maps those to physical pages of RAM, and each logical page can be typically mapped to 0 or 1 physical pages. A given physical page can correspond to multiple logical pages (this is how memory is shared: multiple logical pages correspond to the same physical page). Note this applies regardless of OS; it's a description of the hardware. On process switch, the kernel changes the MMU page mappings, so that each process has its own space. Address 4096 in process 1000 can be (and usually is) completely different from address 4096 in process 1001. Pretty much whenever you see an address, it is a logical address. User space programs hardly ever deal with physical addresses. Now, there are multiple ways to build libraries as well. Let's say a program calls the function foo() in the library. The CPU doesn't know anything about symbols, or function calls really—it just knows how to jump to a logical address, and execute whatever code it finds there. There are a couple of ways it could do this (and similar things apply when a library accesses its own global data, etc.): It could hard-code some logical address to call it at. This requires that the library always be loaded at the exact same logical address. 
If two libraries require the same address, dynamic linking fails and you can't launch the program. Libraries can require other libraries, so this basically requires every library on the system to have unique logical addresses. It's very fast, though, if it works. (This is how a.out did things, and the kind of setup that prelinking does, sort of.) It could hard-code a fake logical address, and tell the dynamic linker to edit in the proper one when loading the library. This costs a fair bit of time when loading the libraries, but after that it is very fast. It could add a layer of indirection: use a CPU register to hold the logical address the library is loaded at, and then access everything as an offset from that register. This imposes a performance cost on each access. Pretty much no one uses #1 anymore, at least not on general-purpose systems. Keeping that unique logical address list is impossible on 32-bit systems (there aren't enough addresses to go around) and an administrative nightmare on 64-bit systems. Pre-linking sort of does this, though, on a per-system basis. Whether #2 or #3 is used depends on whether the library was built with GCC's -fPIC (position independent code) option. #2 is without, #3 is with. Generally, libraries are built with -fPIC, so #3 is what happens. For more details, see Ulrich Drepper's How to Write Shared Libraries (PDF). So, finally, your question can be answered: If the library is built with -fPIC (as it almost certainly should be), the vast majority of pages are exactly the same for every process that loads it. Your processes a and b may well load the library at different logical addresses, but those will point to the same physical pages: the memory will be shared. Further, the data in RAM exactly matches what is on disk, so it can be loaded only when needed by the page fault handler. If the library is built without -fPIC, then it turns out that most pages of the library will need link edits, and will be different.
Therefore, they must be separate physical pages (as they contain different data). That means they're not shared. The pages don't match what is on disk, so I wouldn't be surprised if the entire library is loaded. It can of course subsequently be swapped out to disk (in the swapfile). You can examine this with the pmap tool, or directly by checking various files in /proc. For example, here is a (partial) output of pmap -x on two different newly-spawned bc processes. Note that the addresses shown by pmap are, as typical, logical addresses:

pmap -x 14739
Address           Kbytes     RSS   Dirty Mode   Mapping
00007f81803ac000     244     176       0 r-x--  libreadline.so.6.2
00007f81803e9000    2048       0       0 -----  libreadline.so.6.2
00007f81805e9000       8       8       8 r----  libreadline.so.6.2
00007f81805eb000      24      24      24 rw---  libreadline.so.6.2

pmap -x 17739
Address           Kbytes     RSS   Dirty Mode   Mapping
00007f784dc77000     244     176       0 r-x--  libreadline.so.6.2
00007f784dcb4000    2048       0       0 -----  libreadline.so.6.2
00007f784deb4000       8       8       8 r----  libreadline.so.6.2
00007f784deb6000      24      24      24 rw---  libreadline.so.6.2

You can see that the library is loaded in multiple parts, and pmap -x gives you details on each separately. You'll notice that the logical addresses are different between the two processes; you'd reasonably expect them to be the same (since it's the same program running, and computers are usually predictable like that), but there is a security feature called address space layout randomization that intentionally randomizes them. You can see from the difference in size (Kbytes) and resident size (RSS) that the entire library segment has not been loaded. Finally, you can see that for the larger mappings, Dirty is 0, meaning they correspond exactly to what is on disk. You can re-run with pmap -XX, and it'll show you (depending on the kernel version you're running, as -XX output varies by kernel version) that the first mapping has a Shared_Clean of 176, which exactly matches the RSS.
Shared memory means the physical pages are shared between multiple processes, and since it matches the RSS, that means all of the library that is in memory is shared (look at the See Also below for further explanation of shared vs. private):

pmap -XX 17739
Address      Perm Offset   Device Inode   Size  Rss Pss Shared_Clean Shared_Dirty Private_Clean Private_Dirty Referenced Anonymous AnonHugePages Swap KernelPageSize MMUPageSize Locked VmFlags              Mapping
7f784dc77000 r-xp 00000000 fd:00 1837043   244  176  19          176            0             0             0        176         0             0    0              4           4      0 rd ex mr mw me sd    libreadline.so.6.2
7f784dcb4000 ---p 0003d000 fd:00 1837043  2048    0   0            0            0             0             0          0         0             0    0              4           4      0 mr mw me sd          libreadline.so.6.2
7f784deb4000 r--p 0003d000 fd:00 1837043     8    8   8            0            0             0             8          8         8             0    0              4           4      0 rd mr mw me ac sd    libreadline.so.6.2
7f784deb6000 rw-p 0003f000 fd:00 1837043    24   24  24            0            0             0            24         24        24             0    0              4           4      0 rd wr mr mw me ac sd libreadline.so.6.2

See Also

Getting information about a process' memory usage from /proc/pid/smaps for an explanation of the whole clean/dirty shared/private thing.
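The same sharing can be seen without pmap by comparing the file-backed mappings of two processes directly in /proc (sleep is used here just as a convenient pair of dynamically linked processes; any two would do):

```shell
# Start two unrelated processes that both link against libc
sleep 60 & p1=$!
sleep 60 & p2=$!

# List the shared objects each process has mapped
grep -o '/[^ ]*\.so[^ ]*' "/proc/$p1/maps" | sort -u > /tmp/maps.$p1
grep -o '/[^ ]*\.so[^ ]*' "/proc/$p2/maps" | sort -u > /tmp/maps.$p2

# Libraries appearing in both lists are mapped from the same file,
# so their read-only pages occupy the same physical frames
comm -12 /tmp/maps.$p1 /tmp/maps.$p2

kill "$p1" "$p2"
rm -f /tmp/maps.$p1 /tmp/maps.$p2
```

On a typical system this prints at least the C library and the dynamic loader, which both processes map from the same inode.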
Loading of shared libraries and RAM usage
1,377,261,966,000
Let's say I'm running a script (e.g. in Python). In order to find out how long the program took, one would run time python script1.py Is there a command which keeps track of how much RAM was used as the script was running? In order to find how much RAM is available, one could use free, but this command doesn't fit the task above.
The time(1) command (you may need to install it, perhaps as the time package; it should be in /usr/bin/time) accepts many arguments, including a format string (with -f or --format) which understands (among others):

%M    Maximum resident set size of the process during its lifetime, in Kbytes.
%K    Average total (data+stack+text) memory use of the process, in Kbytes.

Don't confuse the /usr/bin/time command with the time bash builtin. You may need to type the full file path /usr/bin/time (to ask your shell to run the command, not the builtin) or type command time or \time (thanks to Toby Speight and to Arrow for their comments). So you might try (RSS being the resident set size):

/usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S user=%U" python script1.py

You could also try:

/usr/bin/time --verbose python script1.py

You are asking: how much RAM was used as the script was running? and this shows a misconception on your part. Application programs running on Linux (or any modern multi-process operating system) use virtual memory, and each process (including the python process running your script) has its own virtual address space. A process doesn't run directly in physical RAM, but has its own virtual address space (and runs in it), and the kernel implements virtual memory by sophisticated demand-paging using lazy copy-on-write techniques and configures the MMU. The RAM is a physical device and resource used (and managed internally by the kernel) to implement virtual memory (read also about the page cache and about thrashing). You may want to spend several days understanding more about operating systems. I recommend reading Operating Systems: Three Easy Pieces, which is a freely downloadable book. The RAM is used by the entire operating system (not directly by individual processes), and the actual pages in RAM for a given process can vary over time (and could be shared with other processes).
Hence the RAM consumption of a given process is not well defined, since it is constantly changing (you may want its average, or its peak value, etc...), and likewise for the size of its virtual address space. You could also use (especially if your script runs for several seconds) the top(1) utility (perhaps in some other terminal), or ps(1) or pmap(1), maybe using watch(1) to repeat that ps or pmap command. You could even use /proc/ directly (see proc(5)...), perhaps as watch cat /proc/$(pidof python)/status or /proc/$(pidof python)/stat or /proc/$(pidof python)/maps etc... But RAM usage (by the kernel for some process) varies widely over time for a given process (and even its virtual address space is changing, e.g. by calls to mmap(2) and munmap used by ld-linux(8), dlopen(3), malloc(3) & free and many other functions needed by your Python interpreter...). You could also use strace(1) to understand the system calls done by Python for your script (so you would understand how it uses mmap & munmap and other syscalls(2)). You might restrict strace with -e trace=%memory or -e trace=memory to get only memory (i.e. virtual address space) related system calls. BTW, the tracemalloc Python feature could also be useful. I guess that you only care about virtual memory, that is about virtual address space (but not about RAM), used by the Python interpreter to run your Python script. And that is changing during execution of the process. The RSS (or the maximal peak size of the virtual address space) could actually be more useful to know. See also LinuxAteMyRAM.
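The /proc status file mentioned above is handy because the kernel already tracks peak values there: VmPeak is the largest virtual address space the process has ever had, and VmHWM (the "high water mark") is its peak RSS. A minimal sketch, using the current shell's own PID as a stand-in for the python process:

```shell
# Peak and current memory figures for a running process, straight from /proc.
# $$ is the shell itself here; substitute $(pidof python) for your script.
grep -E '^Vm(Peak|Size|HWM|RSS):' "/proc/$$/status"
```

This avoids sampling in a loop: even if the process's usage fluctuates, VmPeak and VmHWM only ever grow, so reading them once near the end of the run gives you the maxima.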
Unix command to tell how much RAM was used during program runtime? [duplicate]
1,377,261,966,000
I'm running BOINC on my old netbook, which only has 2 GB of RAM onboard, which isn't enough for some tasks to run. As in, they refuse to, seeing how low on RAM the device is. I have zRAM with backing_dev and the zstd algorithm enabled, so in reality, lack of memory is never an issue, and in especially tough cases I can always just use systemd-run --scope -p (I have successfully run programs that demanded 16+ GB of RAM using this). How can I make BOINC think that my laptop has more than 2 GB of RAM installed, so that I could run those demanding tasks?
After some thinking, I did this:

Started with nano /proc/meminfo, changed MemTotal, MemFree, MemAvailable, SwapTotal and SwapFree to the desired values, and saved the result to ~/meminfo.

Gave the user boinc a password (sudo passwd boinc) and a shell: in sudo nano /etc/passwd, found the line
boinc:x:129:141:BOINC core client,,,:/var/lib/boinc-client:/usr/sbin/nologin
and changed the /usr/sbin/nologin part to /bin/bash.

Then I faked the RAM info using examples from here: Recover from faking /proc/meminfo

unshare -m bash                          # unshares the mount namespace, for the program "bash" only (and whatever you launch from it)
mount --bind ~/meminfo /proc/meminfo     # substitutes the real meminfo data with the fake one

and confirmed with free that it worked:

              total        used        free      shared  buff/cache   available
Mem:        2321456       21456     2300000           0           0     2300000
Swap:       5000000     1000000     4000000

Then I switched to the user (su - boinc) and just launched the program with

boinc --check_all_logins --redirectio --dir /var/lib/boinc-client

BOINC Manager can then be launched as usual. Total success: tasks which previously refused to run started to download and then ran with no complications.
How can I fake the amount of installed RAM for a specific program in Linux?
1,377,261,966,000
Is there any way to know the size of L1, L2, L3 caches and RAM in Linux?
If you have lshw installed:

$ sudo lshw -C memory

Example

$ sudo lshw -C memory
  ...
     *-cache:0
          description: L1 cache
          physical id: a
          slot: Internal L1 Cache
          size: 32KiB
          capacity: 32KiB
          capabilities: asynchronous internal write-through data
     *-cache:1
          description: L2 cache
          physical id: b
          slot: Internal L2 Cache
          size: 256KiB
          capacity: 256KiB
          capabilities: burst internal write-through unified
     *-cache:2
          description: L3 cache
          physical id: c
          slot: Internal L3 Cache
          size: 3MiB
          capacity: 8MiB
          capabilities: burst internal write-back
     *-memory
          description: System Memory
          physical id: 2a
          slot: System board or motherboard
          size: 8GiB
        *-bank:0
             description: SODIMM DDR3 Synchronous 1334 MHz (0.7 ns)
             product: M471B5273CH0-CH9
             vendor: Samsung
             physical id: 0
             serial: 67010644
             slot: DIMM 1
             size: 4GiB
             width: 64 bits
             clock: 1334MHz (0.7ns)
        *-bank:1
             description: SODIMM DDR3 Synchronous 1334 MHz (0.7 ns)
             product: 16JTF51264HZ-1G4H1
             vendor: Micron Technology
             physical id: 1
             serial: 3749C127
             slot: DIMM 2
             size: 4GiB
             width: 64 bits
             clock: 1334MHz (0.7ns)
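If lshw isn't installed, the same cache information is exposed under sysfs on any reasonably modern kernel, no root needed; a minimal sketch:

```shell
# Per-level CPU cache sizes, from sysfs (one index per cache on cpu0)
for idx in /sys/devices/system/cpu/cpu0/cache/index*; do
    [ -e "$idx/size" ] || continue
    printf 'L%s %s cache: %s\n' \
        "$(cat "$idx/level")" "$(cat "$idx/type")" "$(cat "$idx/size")"
done

# Total RAM, from /proc/meminfo
grep MemTotal /proc/meminfo
```

lscpu (from util-linux) prints a similar cache summary, and getconf LEVEL1_DCACHE_SIZE and friends expose the same numbers to scripts.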
Is there any way to know the size of L1, L2, L3 cache and RAM in Linux? [closed]
1,377,261,966,000
Possible Duplicate: Can I identify my RAM without shutting down linux? I'd like to know the type, size, and model. But I'd like to avoid having to shut down and open the machine.
Check out this How do I detect the RAM memory chip specification from within a Linux machine question. This tool might help: http://www.cyberciti.biz/faq/check-ram-speed-linux/

$ sudo dmidecode --type 17 | more

Sample output:

# dmidecode 2.9
SMBIOS 2.4 present.

Handle 0x0018, DMI type 17, 27 bytes
Memory Device
        Array Handle: 0x0017
        Error Information Handle: Not Provided
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 2048 MB
        Form Factor: DIMM
        Set: None
        Locator: J6H1
        Bank Locator: CHAN A DIMM 0
        Type: DDR2
        Type Detail: Synchronous
        Speed: 800 MHz (1.2 ns)
        Manufacturer: 0x2CFFFFFFFFFFFFFF
        Serial Number: 0x00000000
        Asset Tag: Unknown
        Part Number: 0x5A494F4E203830302D3247422D413131382D

Handle 0x001A, DMI type 17, 27 bytes
Memory Device
        Array Handle: 0x0017
        Error Information Handle: Not Provided
        Total Width: Unknown
        Data Width: Unknown
        Size: No Module Installed
        Form Factor: DIMM
        Set: None
        Locator: J6H2
        Bank Locator: CHAN A DIMM 1
        Type: DDR2
        Type Detail: None
        Speed: Unknown
        Manufacturer: NO DIMM
        Serial Number: NO DIMM
        Asset Tag: NO DIMM
        Part Number: NO DIMM

Alternatively, both newegg.com and crucial.com, among other sites, have memory upgrade advisors/scanners that I've used regularly under Windows. Some of them were web-based at some point, so you could try that, or if you could possibly boot into Windows (even temporarily) it might help. I'm not sure what the results would be under a Windows VM, and unfortunately I am currently running Linux in a VM under Windows 7, so I can't reliably test this myself. I realize that this doesn't necessarily give you exactly what you asked for, but perhaps it will be of use nonetheless.
How to find information about my RAM? [duplicate]
1,377,261,966,000
I'm planning on getting some ECC RAM to replace the non-ECC RAM I currently have installed on my Asus M5A97 Pro motherboard (AMD 970 chipset, FX-6100 CPU). After I install the RAM, how do I tell whether the ECC feature of the RAM is working properly? I thought about dmidecode --type memory, which currently prints, among other things, for each RAM stick:

Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits

(For one, I would expect with 1 bit of ECC per byte the data width to remain 64 bits but the total width to read 72 bits.) Can that be used for determining whether ECC is operative? Or is dmidecode too low level for that? What else could I use (except waiting and seeing if an ECC error shows up in the logs, which would indicate it's working but not that it isn't working)? Update: I later thought of edac-utils. Installing them, I get Not enabling Memory Error Detection and Correction since EDAC_DRIVER is not set. That gave me edac-util and edac-ctl executables. Can one of those be used for this purpose?
It appears that there is no surefire way to tell; however, various approaches can get you some sort of answer. Apparently you pretty much have to try the different ones until you find one that tells you ECC is working. In my case memtest86+ 4.20 couldn't be coaxed into realizing it was dealing with ECC RAM; even if I configured it for ECC On, it still reported ECC: Disabled on the IMC line. I haven't yet tried with a newer version. However (possibly after installing edac-utils; unfortunately I did both essentially at the same time), Linux reports in the boot logs (interspersed with some other entries):

[    4.867198] EDAC MC: Ver: 2.1.0
...
[    4.874374] MCE: In-kernel MCE decoding enabled.
[    4.875414] AMD64 EDAC driver v3.4.0
[    4.875438] EDAC amd64: DRAM ECC enabled.
...
[    4.875542] EDAC amd64: CS0: Unbuffered DDR3 RAM
[    4.875545] EDAC amd64: CS1: Unbuffered DDR3 RAM
[    4.875546] EDAC amd64: CS2: Unbuffered DDR3 RAM
[    4.875548] EDAC amd64: CS3: Unbuffered DDR3 RAM

which is a pretty good indication. Manually doing /etc/init.d/edac restart does not create similar log entries, and looking at an older log from a few reboots ago, I see:

[   13.886688] EDAC MC: Ver: 2.1.0
[   13.890389] MCE: In-kernel MCE decoding enabled.
[   13.891082] AMD64 EDAC driver v3.4.0
[   13.891107] EDAC amd64: DRAM ECC disabled.
[   13.891116] EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
[   13.891117]  Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
[   13.891118]  (Note that use of the override may cause unknown side effects.)
dmidecode --type memory also gives two pretty strong indications: the physical memory array's "error correction type" property (which however for some reason showed the same on non-ECC RAM, so this may be related to the motherboard's support rather than the memory's capabilities),

Handle 0x0026, DMI type 16, 23 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: Multi-bit ECC

and each memory device's total width and data width, respectively (the additional bits being those used for the ECC):

Handle 0x0028, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x0026
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
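On kernels where the EDAC driver loaded successfully, the same information (plus live error counters) is visible in sysfs without edac-util; a small check script, assuming the standard EDAC sysfs layout:

```shell
# Report whether an EDAC memory controller is registered, and its error counts
if ls /sys/devices/system/edac/mc/mc* >/dev/null 2>&1; then
    for mc in /sys/devices/system/edac/mc/mc*; do
        echo "$(basename "$mc"): $(cat "$mc/mc_name" 2>/dev/null)"
        echo "  corrected errors:   $(cat "$mc/ce_count" 2>/dev/null)"
        echo "  uncorrected errors: $(cat "$mc/ue_count" 2>/dev/null)"
    done
else
    echo "No EDAC memory controller registered (ECC reporting not active)"
fi
```

A nonzero ce_count over time proves ECC is correcting errors; a registered controller with zero counts is the healthy case and matches the "DRAM ECC enabled" boot message above.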
How to tell whether RAM ECC is working?
1,377,261,966,000
I have 32 GB of memory in my PC. This is more than enough for a Linux OS. Is there an easy-to-use version of Linux (Ubuntu preferably) that can be booted via optical or USB disk and be run completely within RAM? I know a live disc can be booted without a hard disk, but stuff still runs off the disc and this takes a while to load. I'd like everything loaded into RAM and then run from there, completely volatile. Any files I need to create would be saved to a USB disk. I'm aware of http://en.wikipedia.org/wiki/List_of_Linux_distributions_that_run_from_RAM but these are all lightweight distributions designed for small amounts of RAM. I'd prefer something like Ubuntu instead of these light versions.
Ubuntu can run from RAM, but it requires some manual changes: https://wiki.ubuntu.com/BootToRAM
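For the live-disc case specifically, Ubuntu's casper initramfs supports a toram boot option that copies the whole squashfs image into RAM at boot, after which the medium can be removed; everything then runs at RAM speed. The file paths below are the usual live-ISO layout and may differ between releases; a sketch of the edited boot entry:

```shell
# GRUB boot entry for an Ubuntu live medium, with "toram" appended
menuentry "Ubuntu (load to RAM)" {
    linux  /casper/vmlinuz boot=casper toram quiet splash
    initrd /casper/initrd
}
```

With 32 GB of RAM, holding a full Ubuntu live image (a few GB) in memory is not a problem; changes are still volatile, exactly as the question asks.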
Is there a linux OS that can be loaded entirely into RAM?
1,377,261,966,000
I have been tasked with running Linux as an operating system on an embedded device. The target has an x86 processor and has 8 GB CompactFlash device for storage. I have managed to use buildroot to create the kernel image and cross compilation tools. I have partitioned the CF device into a small FAT partition where the kernel image resides as well as syslinux boot configuration and an ext3 file system where I have decompressed the root file system generated by buildroot to. The system boots successfully using syslinux by setting the root directory to the CF ext3 partition where my buildroot file system is located. My question is centred around the need for robustness in the face of immediate (and frequent) power loss as it is crucial for the device to boot successfully after power outages. I have read that mounting the root file system as read only is a way of ensuring data integrity. Is this a sensible way for me to proceed? I have also read about the possibility of loading the root file system into RAM to achieve the same thing but as yet do not know how to do so. Is there a preferred way of achieving this goal and if so what is the best way for me to proceed?
New answer (2015-03-22)

(Note: This answer is simpler than the previous one, but not more secure. My first answer is stronger because you can keep files read-only through filesystem mount options, which take precedence over permission flags. So forcing a write to a file without write permission won't work at all.)

Yes, under Debian there is a package: fsprotect (homepage). It uses aufs (by default; it could use another unionfs tool) to permit live-session changes, but keeps them in RAM, so everything is forgotten at reboot. You can install it by simply running:

apt-get install fsprotect

Once done, from the online doc:

Edit /boot/grub/menu.lst or /etc/default/grub2 or /etc/lilo.conf and add "fsprotect=1G" to the kernel parameters. Modify 1G as needed.
Apply the changes (i.e. run update-grub).
Edit /etc/default/fsprotect if you want to protect filesystems other than /.
Reboot.

You may also want to password-protect the grub bootloader or forbid any changes to it.

From there, even if some file is protected against changes, for example by chmod ugo-w myfile, opening it with vi myfile and writing with the command :w! will work, and myfile becomes changed. You may reboot in order to retrieve the unmodified myfile. That's not even possible with my first solution below:

Old (first) answer:

Yes, it is an involved solution, but powerful!

Making r/o usable

You have to mount some directories read-write, like /var, /etc and maybe /home. This can be done using aufs or unionfs. I like another way, using /dev/shm and mount --bind:

cp -a /var /dev/shm/
mount --bind /dev/shm/var /var

Before that, you could move all directories which do not have to change in normal operation into a static-var, then create symlinks in /var:

mkdir /static-var
mkdir /static-var/cache
mkdir /static-var/lib
mv /var/lib/dpkg /static-var/lib/dpkg
ln -s /static-var/lib/dpkg /var/lib/dpkg
mv /var/cache/apt /static-var/cache/apt
ln -s /static-var/cache/apt /var/cache/apt
...
# and so on

So when remounting / read-only, copying /var into /dev/shm won't take much space, as most files have been moved to /static-var and only symlinks are copied into RAM.

The best way to tune this is to do a full power cycle, run one day of normal work, and finally run a command like:

find / -type f -mtime -1

so you will see which files need to be located on a read-write partition.

Logging

As this host has no writable persistent storage, in order to keep history and other logs you have to configure a remote syslog server:

echo >/etc/syslog.conf '*.* @mySyslogServer.localdomain'

This way, if your system breaks for any reason, everything up to that point is logged.

Upgrading

When running with some mount --bind in use, to do an upgrade while the system is in use (without the need of running init 1, to reduce downtime), the simplest way is to rebuild a clean root that is able to do the upgrade. After remounting / in read-write mode:

mount -o remount,rw /
for mpnt in /{,proc,sys,dev{,/pts}}; do mount --bind $mpnt /mnt$mpnt; done
chroot /mnt
apt-get update && apt-get dist-upgrade
exit
umount /mnt/{dev{/pts,},proc,sys,}
sync
mount -o remount,ro /

And now:

shutdown -r now
Is using a read only root file system a good idea for embedded setup?
1,377,261,966,000
How can I be sure that a tmpfs filesystem only uses physical RAM and not a swap partition on disk? Since I have a slow HDD and fast RAM, I would like at least to give higher priority to RAM usage for swap and tmpfs, or to disable disk usage for tmpfs-related mount points.
use ramfs instead of tmpfs. ramfs is a ramdisk (no swap); tmpfs can be both. in your /etc/fstab:

none /path/to/location ramfs defaults,size=512M 0 0

edit the size parameter to whatever you like, but be careful not to exceed your actual amount of ram (note that ramfs does not actually enforce the size= option; it will happily grow past it). NOTE: the use of a ramfs instead of tmpfs is not something i would recommend. you will find yourself experiencing stability issues if something happens and you write a ton of data to your ramdisk. you can NOT unallocate ram from a ramfs. once your ramdisk (all of your ram) is full your system will seize up. ram is volatile memory, meaning once it loses power all data is gone. so if your ramdisk fills up your ram and you crash you will never see what was on your ram disk again. unlike ramfs, tmpfs limits its size.
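Whichever you choose, you can confirm what a given mount point actually is by checking /proc/mounts; ramfs and tmpfs both appear there with their filesystem type in the third field:

```shell
# Show every ramfs/tmpfs mount currently active, with its mount point
awk '$3 == "tmpfs" || $3 == "ramfs" { print $3, "mounted on", $2 }' /proc/mounts
```

On most distributions you will at least see tmpfs on /dev/shm and /run; any ramfs entry you add via fstab will show up here as ramfs.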
How to make tmpfs to use only the physical RAM and not the swap?
1,377,261,966,000
I have a desktop system where CentOS 7 is installed. It has 4 cores and 12 GB of memory. To find memory information I use the free -h command, but one thing confuses me.

[user@xyz-hi ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:            11G        4.6G        231M         94M        6.8G        6.6G
Swap:          3.9G        104M        3.8G

The total column says 11G (that's correct), the last column, available, says 6.6G, and used is 4.6G. If used memory is 4.6 GB, then the remainder should be 6.4 GB (11 - 4.6 = 6.4). What is the correct interpretation of the above output? What is the difference between total, available, and free memory? Am I out of memory in the above case if I need 1 GB more for some new application?
man free solved my problem.

DESCRIPTION
       free displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo. The displayed columns are:

       total       Total installed memory (MemTotal and SwapTotal in /proc/meminfo)

       used        Used memory (calculated as total - free - buffers - cache)

       free        Unused memory (MemFree and SwapFree in /proc/meminfo)

       shared      Memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available on kernels 2.6.32, displayed as zero if not available)

       buffers     Memory used by kernel buffers (Buffers in /proc/meminfo)

       cache       Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)

       buff/cache  Sum of buffers and cache

       available   Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
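These columns can be reproduced directly from /proc/meminfo; a small sketch that recomputes free's used figure from the raw fields (on recent procps the cache term is Cached + SReclaimable, which is what is used here; very old versions used Slab instead):

```shell
# Recompute free's columns from /proc/meminfo (all values in kB):
# used = MemTotal - MemFree - Buffers - (Cached + SReclaimable)
awk '
    /^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SReclaimable):/ { v[$1] = $2 }
    END {
        used = v["MemTotal:"] - v["MemFree:"] - v["Buffers:"] - v["Cached:"] - v["SReclaimable:"]
        printf "total: %d kB\nused: %d kB\nfree: %d kB\navailable: %d kB\n",
               v["MemTotal:"], used, v["MemFree:"], v["MemAvailable:"]
    }' /proc/meminfo
```

This makes the arithmetic in the question visible: total - used leaves free plus buff/cache, and available is the kernel's own estimate of how much of that is really reclaimable, which is why available sits between free and total - used.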
What is difference between total and free memory
1,377,261,966,000
My server runs out of memory even though there is swap available. Why? I can reproduce it this way:

eat_20GB_RAM() {
  perl -e '$a="c"x10000000000;print "OK\n";sleep 10000';
}
export -f eat_20GB_RAM
parallel -j0 eat_20GB_RAM ::: {1..25} &

When that stabilizes (i.e. all processes reach sleep) I run a few more:

parallel --delay 5 -j0 eat_20GB_RAM ::: {1..25} &

When that stabilizes (i.e. all processes reach sleep) around 800 GB RAM/swap is used:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:         515966      440676       74514           1         775       73392
Swap:       1256720      341124      915596

When I run a few more:

parallel --delay 15 -j0 eat_20GB_RAM ::: {1..50} &

I start to get:

Out of memory!

even though there is clearly swap available:

$ free
              total        used        free      shared  buff/cache   available
Mem:      528349276   518336524     7675784       14128     2336968     7316984
Swap:    1286882284  1017746244   269136040

Why?

$ cat /proc/meminfo
MemTotal:       528349276 kB
MemFree:          7647352 kB
MemAvailable:     7281164 kB
Buffers:            70616 kB
Cached:           1503044 kB
SwapCached:         10404 kB
Active:         476833404 kB
Inactive:        20837620 kB
Active(anon):   476445828 kB
Inactive(anon):  19673864 kB
Active(file):      387576 kB
Inactive(file):   1163756 kB
Unevictable:        18776 kB
Mlocked:            18776 kB
SwapTotal:     1286882284 kB
SwapFree:       269134804 kB
Dirty:                  0 kB
Writeback:              0 kB
AnonPages:      496106244 kB
Mapped:            190524 kB
Shmem:              14128 kB
KReclaimable:      753204 kB
Slab:            15772584 kB
SReclaimable:      753204 kB
SUnreclaim:      15019380 kB
KernelStack:        46640 kB
PageTables:       3081488 kB
NFS_Unstable:           0 kB
Bounce:                 0 kB
WritebackTmp:           0 kB
CommitLimit:   1551056920 kB
Committed_AS:  1549560424 kB
VmallocTotal:  34359738367 kB
VmallocUsed:      1682132 kB
VmallocChunk:           0 kB
Percpu:            202752 kB
HardwareCorrupted:      0 kB
AnonHugePages:          0 kB
ShmemHugePages:         0 kB
ShmemPmdMapped:         0 kB
FileHugePages:          0 kB
FilePmdMapped:          0 kB
CmaTotal:               0 kB
CmaFree:                0 kB
HugePages_Total:        0
HugePages_Free:         0
HugePages_Rsvd:         0
HugePages_Surp:         0
Hugepagesize:        2048 kB
Hugetlb:                0 kB
DirectMap4k:     12251620 kB
DirectMap2M:    522496000 kB
DirectMap1G:      3145728 kB
In /proc/meminfo you find:

    CommitLimit: 1551056920 kB
    Committed_AS: 1549560424 kB

So you are at the commit limit. If you have disabled overcommitting of memory (to avoid the OOM killer) with:

    echo 2 > /proc/sys/vm/overcommit_memory

then the commit limit is computed as:

    2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable amount (default is 50%) of physical RAM. Depending on the amount you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate.

(From: https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)

You can use the full memory with:

    echo 100 > /proc/sys/vm/overcommit_ratio

Then you will only get out-of-memory when physical RAM and swap are both fully reserved. The name overcommit_ratio is in this case a bit misleading: you are not overcommitting anything.

Even with this setup you may see out-of-memory before swap is exhausted. malloc.c:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        long bytes, sleep_sec;
        if (argc != 3) {
            printf("Usage: malloc bytes sleep_sec\n");
            exit(1);
        }
        sscanf(argv[1], "%ld", &bytes);
        sscanf(argv[2], "%ld", &sleep_sec);
        printf("Bytes: %ld Sleep: %ld\n", bytes, sleep_sec);
        if (malloc(bytes)) {
            sleep(sleep_sec);
        } else {
            printf("Out of memory\n");
            exit(1);
        }
        return 0;
    }

Compile as:

    gcc -o malloc malloc.c

Run as (reserve 1 GB for 10 seconds):

    ./malloc 1073741824 10

If you run this you may see OOM even though there is swap free:

    # Plenty of ram+swap free before we start
    $ free -m
                  total   used     free  shared  buff/cache  available
    Mem:         515966   2824   512361      16         780     511234
    Swap:       1256720      0  1256720

    # Reserve 1.8 TB
    $ ./malloc 1800000000000 100 &
    Bytes: 1800000000000 Sleep: 100

    # It looks as if there is plenty of ram+swap free
    $ free -m
                  total   used     free  shared  buff/cache  available
    Mem:         515966   2824   512361      16         780     511234
    Swap:       1256720      0  1256720

    # But there isn't: it is all reserved (just not used yet)
    $ cat /proc/meminfo | grep omm
    CommitLimit:    1815231560 kB
    Committed_AS:   1761680484 kB

    # Thus this fails (as you would expect)
    $ ./malloc 180000000000 100
    Bytes: 180000000000 Sleep: 100
    Out of memory

So while free in practice often will do The Right Thing, looking at CommitLimit and Committed_AS seems to be more bullet-proof.
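The CommitLimit/Committed_AS check from the answer is easy to script. A minimal Python sketch (the function name is mine; the sample text is taken from the numbers at the top of the answer):

```python
import re

def commit_headroom(meminfo_text):
    """Return (CommitLimit, Committed_AS, headroom) in kB from /proc/meminfo text."""
    fields = dict(re.findall(r"^(\w+):\s+(\d+) kB", meminfo_text, re.M))
    limit = int(fields["CommitLimit"])
    committed = int(fields["Committed_AS"])
    return limit, committed, limit - committed

# Sample values from the answer; on a live system you would read
# open("/proc/meminfo").read() instead.
sample = """CommitLimit:    1551056920 kB
Committed_AS:   1549560424 kB"""
print(commit_headroom(sample))  # headroom is only ~1.4 GB out of ~1.5 TB
```

With overcommit disabled, an allocation larger than that headroom fails, no matter what free says.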
Out of memory, but swap available
1,377,261,966,000
How can I prevent Chrome from taking more than, for example, 4 GB of RAM? From time to time it decides to take something like 7 GB (with 8 GB RAM total) and makes my computer unusable. Any help appreciated. PS: I didn't even have more than 10 tabs open. Edit: maybe I did ... something like 15. Anyway, I want Chrome to freeze or shut down, not to freeze the whole system.
I believe you would want to use something like cgroups to limit resource usage for an individual process. So you might want to do something like this:

    cgcreate -g memory,cpu:chromegroup
    cgset -r memory.limit_in_bytes=4G chromegroup

to create chromegroup and restrict the memory usage for the group to 4 GB (memory.limit_in_bytes takes a byte count; a G suffix is also accepted). Then either

    cgclassify -g memory,cpu:chromegroup $(pidof chrome)

to move the current chrome processes into the group and restrict their memory usage to the set limit, or just launch chrome within the group:

    cgexec -g memory,cpu:chromegroup chrome

However, it's pretty odd that chrome is using that much memory in the first place. Try purging and reinstalling/recompiling first to see if that doesn't fix the issue, because it really should not be using that much memory to begin with, and this solution is only a band-aid over the real problem.
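One easy mistake with memory.limit_in_bytes is the unit: it expects bytes, so a literal like 2048 would be a 2 KiB cap, not 2 GB. A quick sanity check of the conversion for the 4 GB cap the question asks about (plain arithmetic, nothing cgroup-specific):

```python
def gib_to_bytes(gib):
    """Convert GiB to the byte count expected by memory.limit_in_bytes."""
    return gib * 1024 ** 3

print(gib_to_bytes(4))  # 4294967296
```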
Chrome eats all RAM and freezes system
1,377,261,966,000
I have done several searches and I cannot find anything on Google about why, but Arch has allocated 7.7 GiB to RAM and 7.9 GiB to swap. I only have 8 GiB of RAM; it allocated more to swap than to regular memory. How could I change the allocations? Output of cat /proc/meminfo:

    MemTotal:        8091960 kB
    MemFree:         4925736 kB
    MemAvailable:    6131188 kB
    Buffers:          268936 kB
    Cached:          1219460 kB
    SwapCached:            0 kB
    Active:          1527516 kB
    Inactive:        1301140 kB
    Active(anon):     768904 kB
    Inactive(anon):   711440 kB
    Active(file):     758612 kB
    Inactive(file):   589700 kB
    Unevictable:          32 kB
    Mlocked:              32 kB
    SwapTotal:       8300540 kB
    SwapFree:        8300540 kB
    Dirty:              1960 kB
    Writeback:             0 kB
    AnonPages:       1306968 kB
    Mapped:           382800 kB
    Shmem:            140100 kB
    Slab:             197964 kB
    SReclaimable:     163104 kB
    SUnreclaim:        34860 kB
    KernelStack:        6864 kB
    PageTables:        29200 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    12346520 kB
    Committed_AS:    3927808 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:           0 kB
    VmallocChunk:          0 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:    186368 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:      584316 kB
    DirectMap2M:     7716864 kB
    DirectMap1G:           0 kB
What this is telling you is that you have 16GB of virtual memory. Virtual memory is the total of physical RAM and swap space added up. It's a way of letting your system run more programs than it physically has the space for. How much swap should be allocated to a machine is a complicated and opinionated question; ask 2 people and get 3 answers :-) Your setup isn't bad, and I wouldn't recommend making changes to it until you learn a lot more about how virtual memory works and how to tune it. It's a good starting point.
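The "16 gigs" is just the sum of the two totals in the question's /proc/meminfo. Spelled out (a hypothetical helper, purely to show the arithmetic):

```python
def virtual_memory_kb(mem_total_kb, swap_total_kb):
    # Virtual memory = physical RAM + swap, as the answer describes.
    return mem_total_kb + swap_total_kb

total = virtual_memory_kb(8091960, 8300540)  # MemTotal and SwapTotal from the question
print(total, "kB, roughly", round(total / 1024 ** 2, 1), "GiB")
```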
Arch Linux thinks I have about 16 gigs of ram when I only have 8
1,377,261,966,000
All of a sudden today, our CentOS release 6.4 (Final) system started throwing EDAC errors:

    kernel: EDAC MC0: UE page 0x0, offset 0x0, grain 0, row 7, labels ":": i3200 UE

I rebooted, and the errors stopped. I have been searching for answers, but they fall into two camps: memory or a chipset. I would like some advice on where to search further to narrow this down to chipset or memory.
What you're experiencing is an Error Detection and Correction (EDAC) event. Given the error includes this bit: MC0, you're experiencing a memory error. This message is telling you where specifically you're experiencing the error. MC0 means the RAM in the first socket (#0). The rest of that message is telling you specifically where within that RAM DIMM the error occurred. Given you're getting just one, I would continue to monitor it but do nothing for the time being. If it continues, then you are most likely experiencing a failing memory module. You could also test it more thoroughly using memtest86+. This previous question, titled How to blacklist a correct bad RAM sector according to MemTest86+ error indication?, will show you how to blacklist the memory if you're interested in that as well.
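To show how the pieces of that log line map to hardware, here is a small regex sketch; the pattern and group names are mine, not an official EDAC parser, and it only pulls out the memory controller number and whether the error was uncorrectable (UE) or correctable (CE):

```python
import re

# Extract the memory controller index, error type, and faulting page
EDAC_RE = re.compile(r"EDAC MC(?P<mc>\d+): (?P<type>UE|CE) page 0x(?P<page>[0-9a-fA-F]+)")

msg = 'kernel: EDAC MC0: UE page 0x0, offset 0x0, grain 0, row 7, labels ":": i3200 UE'
m = EDAC_RE.search(msg)
print(m.group("mc"), m.group("type"))  # controller 0, uncorrectable error
```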
Does kernel: EDAC MC0: UE page 0x0 point to bad memory, a driver, or something else?
1,377,261,966,000
I have an Intel Atom D2700 (Synology NAS DS412+) with 4GB RAM running kernel 3.2.30 x86_64. This unit has a single DIMM slot. One thing I, and others, have found is that with a 4GB DIMM installed rather than a 2GB DIMM, the unit experiences significantly higher CPU usage under load (for example, 'heavy' Java applications like Minecraft servers, or Plex transcoding, etc). Many users have found that when they drop back to 2GB, all of these high-load issues disappear. Is this something specific to Linux that may cause this? Or is this an issue with the Atom itself?
Have a look at the Intel Atom® processor D2000 and N2000 series Datasheet, vol. 1. Note pages 32-33 and table 3-24. The takeaway from that is while your processor and memory controller support 4 GB of total RAM, they only support it in 2 GB chunks, in 2 GB per slot. Since your 412+ only has one slot, 2 GB is your max RAM. Anything above that is likely to be unpredictable.
Processor usage increases with 4GB RAM installed
1,377,261,966,000
I want to create a fixed-size Linux ramdisk which never swaps to disk. Note that my question is not "why" I want to do this (let's say, for example, that it's for an educational purpose or for research): the question is how to do it. As I understand it, ramfs cannot be limited in size, so it doesn't fit my requirement of having a fixed-size ramdisk. It also seems that tmpfs may be swapped to disk, so it doesn't fit my requirement of never swapping to disk. How can you create a fixed-size Linux ramdisk which never swaps to disk? Is it possible, for example, to create a tmpfs inside a ramfs (would such a solution fit both my requirements), and if so, how? Note that performance is not an issue, and the ramdisk getting full and triggering "disk full" errors isn't an issue either.
This is just a thought and has more than one downside, but it might be usable enough anyway. How about creating an image file and a filesystem inside it on top of ramfs, then mounting the image as a loop device? That way you could limit the size of the ramdisk by simply limiting the image file size. For example:

    $ mkdir -p /ram/{ram,loop}
    $ mount -t ramfs none /ram/ram
    $ dd if=/dev/zero of=/ram/ram/image bs=2M count=1
    1+0 records in
    1+0 records out
    2097152 bytes (2.1 MB) copied, 0.00372456 s, 563 MB/s
    $ mke2fs /ram/ram/image
    mke2fs 1.42 (29-Nov-2011)
    /ram/ram/image is not a block special device.
    Proceed anyway? (y,n) y
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    Stride=0 blocks, Stripe width=0 blocks
    256 inodes, 2048 blocks
    102 blocks (4.98%) reserved for the super user
    First data block=1
    Maximum filesystem blocks=2097152
    1 block group
    8192 blocks per group, 8192 fragments per group
    256 inodes per group
    Allocating group tables: done
    Writing inode tables: done
    Writing superblocks and filesystem accounting information: done
    $ mount -o loop /ram/ram/image /ram/loop
    $ dd if=/dev/zero of=/ram/loop/test bs=1M count=5
    dd: writing `/ram/loop/test': No space left on device
    2+0 records in
    1+0 records out
    2027520 bytes (2.0 MB) copied, 0.00853692 s, 238 MB/s
    $ ls -l /ram/loop
    total 2001
    drwx------ 2 root root   12288 Jan 27 17:12 lost+found
    -rw-r--r-- 1 root root 2027520 Jan 27 17:13 test

In the (somewhat too long) example above, the image file is created to be 2 megabytes, and when trying to write more than 2 megabytes to it, the write simply fails because the filesystem is full. One obvious downside to all this is of course the added complexity, but at least for academic purposes this should suffice.
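The size bookkeeping in that session is consistent: dd creates a 2 MiB image, mke2fs carves it into 2048 one-KiB blocks (reserving 102 for root), and the failed test write tops out at 1980 blocks. Checked with Python used purely as a calculator:

```python
block_size = 1024                 # mke2fs reported "Block size=1024"
image_bytes = 2 * 1024 ** 2       # dd bs=2M count=1
total_blocks = image_bytes // block_size
print(total_blocks)               # 2048, matching the mke2fs output
print(2027520 // block_size)      # 1980 blocks actually landed in /ram/loop/test
```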
How to create a fixed-size Linux ramdisk which never swaps to disk?
1,377,261,966,000
My problems were caused by a faulty memory module and quite possibly a broken kernel binary.

I just now booted my PC with basically brand new hardware. I've been running Debian 6.0 AMD64 before, and no change there (literally; I just unplugged the hard disks from the old motherboard and reconnected them to the new one), but found something curious:

I have physically installed 4 x 8 GB of RAM. UEFI/BIOS setup reports 16383 MB of RAM. Linux free -m reports 2985 MB of RAM.

2985 MB seems too close to the magical 3 GB mark for it to be purely coincidence, but uname -r prints 2.6.32-5-amd64; clearly a 64-bit kernel, which is all that has ever been installed on the system drive I'm using. The new motherboard is an Asus M5A97 Pro, which has four DDR3 slots supposedly supporting 8 GB modules. The memory modules themselves are identical, four Corsair XMS3 PC12800 8 GB, purchased together. I haven't looked around the UEFI setup in detail, but did browse through it and saw nothing that seemed like it would need changing to enable large amounts of RAM.

Edit: Further confirmation that I really am running 64-bit:

    # file `which free`
    /usr/bin/free: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
    #

What's up with this, and what can I do about it?

Edit 2: dmesg, dmidecode and meminfo, as requested. I don't have physical access to the system right now, so will have to wait until tonight to pull out some modules and see what that does. (Note that dmidecode reports 3 x 8GB plus one empty DIMM slot. Also note the MTRR mismatch message from the kernel, leading to a loss of 13 GB, which at least adds up with what the motherboard itself is reporting.)

    # dmidecode --type memory
    # dmidecode 2.9
    SMBIOS 2.7 present.
Handle 0x0026, DMI type 16, 23 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: Multi-bit ECC Maximum Capacity: 32 GB Error Information Handle: Not Provided Number Of Devices: 4 Handle 0x0028, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 8192 MB Form Factor: DIMM Set: None Locator: DIMM0 Bank Locator: BANK0 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: Manufacturer0 Serial Number: SerNum0 Asset Tag: AssetTagNum0 Part Number: Array1_PartNumber0 Handle 0x002A, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 8192 MB Form Factor: DIMM Set: None Locator: DIMM1 Bank Locator: BANK1 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: Manufacturer1 Serial Number: SerNum1 Asset Tag: AssetTagNum1 Part Number: Array1_PartNumber1 Handle 0x002C, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 8192 MB Form Factor: DIMM Set: None Locator: DIMM2 Bank Locator: BANK2 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: Manufacturer2 Serial Number: SerNum2 Asset Tag: AssetTagNum2 Part Number: Array1_PartNumber2 Handle 0x002E, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: Unknown Data Width: 64 bits Size: No Module Installed Form Factor: DIMM Set: None Locator: DIMM3 Bank Locator: BANK3 Type: Unknown Type Detail: Synchronous Speed: Unknown Manufacturer: Manufacturer3 Serial Number: SerNum3 Asset Tag: AssetTagNum3 Part Number: Array1_PartNumber3 # ====================================================================== # cat /proc/meminfo MemTotal: 3056820 kB MemFree: 1470820 kB Buffers: 
390204 kB Cached: 194660 kB SwapCached: 0 kB Active: 488024 kB Inactive: 419096 kB Active(anon): 231112 kB Inactive(anon): 96660 kB Active(file): 256912 kB Inactive(file): 322436 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 8 kB Writeback: 0 kB AnonPages: 322320 kB Mapped: 33012 kB Shmem: 5472 kB Slab: 613952 kB SReclaimable: 597404 kB SUnreclaim: 16548 kB KernelStack: 2384 kB PageTables: 19472 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1528408 kB Committed_AS: 621464 kB VmallocTotal: 34359738367 kB VmallocUsed: 294484 kB VmallocChunk: 34359429080 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 9216 kB DirectMap2M: 2054144 kB DirectMap1G: 1048576 kB # ====================================================================== # dmesg | grep -i memory [ 0.000000] WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 13295MB of RAM. [ 0.000000] WARNING: at /tmp/buildd/linux-2.6-2.6.32/debian/build/source_amd64_none/arch/x86/kernel/cpu/mtrr/cleanup.c:1092 mtrr_trim_uncached_memory+0x2e6/0x311() [ 0.000000] [<ffffffff814f7f1e>] ? mtrr_trim_uncached_memory+0x2e6/0x311 [ 0.000000] [<ffffffff814f7f1e>] ? mtrr_trim_uncached_memory+0x2e6/0x311 [ 0.000000] [<ffffffff814f7f1e>] ? 
mtrr_trim_uncached_memory+0x2e6/0x311 [ 0.000000] initial memory mapped : 0 - 20000000 [ 0.000000] init_memory_mapping: 0000000000000000-00000000bdf00000 [ 0.000000] PM: Registered nosave memory: 000000000009d000 - 000000000009e000 [ 0.000000] PM: Registered nosave memory: 000000000009e000 - 00000000000a0000 [ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000 [ 0.000000] PM: Registered nosave memory: 00000000000e0000 - 0000000000100000 [ 0.000000] PM: Registered nosave memory: 00000000bd94d000 - 00000000bd99c000 [ 0.000000] PM: Registered nosave memory: 00000000bd99c000 - 00000000bd9a6000 [ 0.000000] PM: Registered nosave memory: 00000000bd9a6000 - 00000000bdade000 [ 0.000000] PM: Registered nosave memory: 00000000bdade000 - 00000000bdaef000 [ 0.000000] PM: Registered nosave memory: 00000000bdaef000 - 00000000bdb02000 [ 0.000000] PM: Registered nosave memory: 00000000bdb02000 - 00000000bdb04000 [ 0.000000] PM: Registered nosave memory: 00000000bdb04000 - 00000000bdb0d000 [ 0.000000] PM: Registered nosave memory: 00000000bdb0d000 - 00000000bdb13000 [ 0.000000] PM: Registered nosave memory: 00000000bdb13000 - 00000000bdb75000 [ 0.000000] PM: Registered nosave memory: 00000000bdb75000 - 00000000bdd78000 [ 0.000000] Memory: 3046732k/3111936k available (3075k kernel code, 4728k absent, 60476k reserved, 1879k data, 584k init) [ 1.636730] Freeing initrd memory: 9501k freed [ 1.647370] Freeing unused kernel memory: 584k freed [ 4.876602] [TTM] Zone kernel: Available graphics memory: 1528410 kiB. [ 4.876615] [drm] radeon: 256M of VRAM memory ready [ 4.876617] [drm] radeon: 512M of GTT memory ready. [ 25.571018] VBoxDrv: dbg - g_abExecMemory=ffffffffa051d6c0 # Grepping for e820 shows a bunch of ranges, topping out with e820 update range: 00000000bdf00000 - 000000043f000000 (usable) ==> (reserved). 43f000000 is 16 GiB, bdf00000 is 3039 MiB. I do not see that being coincidental. 
# dmesg | grep -i e820 [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009d800 (usable) [ 0.000000] BIOS-e820: 000000000009d800 - 00000000000a0000 (reserved) [ 0.000000] BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved) [ 0.000000] BIOS-e820: 0000000000100000 - 00000000bd94d000 (usable) [ 0.000000] BIOS-e820: 00000000bd94d000 - 00000000bd99c000 (ACPI NVS) [ 0.000000] BIOS-e820: 00000000bd99c000 - 00000000bd9a6000 (ACPI data) [ 0.000000] BIOS-e820: 00000000bd9a6000 - 00000000bdade000 (reserved) [ 0.000000] BIOS-e820: 00000000bdade000 - 00000000bdaef000 (ACPI NVS) [ 0.000000] BIOS-e820: 00000000bdaef000 - 00000000bdb02000 (reserved) [ 0.000000] BIOS-e820: 00000000bdb02000 - 00000000bdb04000 (ACPI NVS) [ 0.000000] BIOS-e820: 00000000bdb04000 - 00000000bdb0d000 (reserved) [ 0.000000] BIOS-e820: 00000000bdb0d000 - 00000000bdb13000 (ACPI NVS) [ 0.000000] BIOS-e820: 00000000bdb13000 - 00000000bdb75000 (reserved) [ 0.000000] BIOS-e820: 00000000bdb75000 - 00000000bdd78000 (ACPI NVS) [ 0.000000] BIOS-e820: 00000000bdd78000 - 00000000bdf00000 (usable) [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved) [ 0.000000] BIOS-e820: 00000000fec10000 - 00000000fec11000 (reserved) [ 0.000000] BIOS-e820: 00000000fec20000 - 00000000fec21000 (reserved) [ 0.000000] BIOS-e820: 00000000fed00000 - 00000000fed01000 (reserved) [ 0.000000] BIOS-e820: 00000000fed61000 - 00000000fed71000 (reserved) [ 0.000000] BIOS-e820: 00000000fed80000 - 00000000fed90000 (reserved) [ 0.000000] BIOS-e820: 00000000fef00000 - 0000000100000000 (reserved) [ 0.000000] BIOS-e820: 0000000100001000 - 000000043f000000 (usable) [ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved) [ 0.000000] e820 update range: 00000000bdf00000 - 000000043f000000 (usable) ==> (reserved) [ 0.000000] update e820 for mtrr # EDIT 3/4 -- partial success: Upgrading the UEFI BIOS from version 0705 x64 08/23/2011 to 1007 02/10/2012 did not help: the exact same problem remained. 
Removing one DIMM module (I took a lucky guess at which slot was #4: the one farthest from the CPU) allowed the BIOS to detect and use the remaining 24 GB, although a three-DIMM configuration is not "recommended" according to the diagram in the user's manual. Notably, seating one of the remaining DIMMs in slot #4 still allowed it to be used, so the slot is fine. Reseating the "original" DIMM into that slot dropped me back at my starting point. Booting from the Debian 6.0.3 AMD64 installation CD into a rescue environment and checking its dmesg output shows no similar MTRR errors. Also, in that environment, with 3 x 8GB installed, 24 GB (plus or minus epsilon times pi or thereabouts; I didn't do the exact math) shows up as usable according to free. Upgrading/reinstalling the kernel (there was a minor upgrade available) seems to have fixed the MTRR issues as well. dmesg now reports 26198016 KB total, and no MTRR errors, which is in line with what I would expect with 3 x 8GB installed. free -m now reports 24114 MB total RAM, which quite frankly is close enough for me. This smells like a barfed DIMM, plus a kernel that for whatever reason was damaged; that latter may have happened during the power outage (though I must say that's an odd way for the kernel to break!). The non-working DIMM will go back to the reseller as soon as I talk to them (hopefully tomorrow). (hopefully) FINAL EDIT I RMA'd one of the two pairs of DIMMs, it was accepted by the reseller as damaged and they sent me a new pair, which seems to work just fine. So I'm now basically at where I originally intended nearly a month ago (although a large fraction of that time was not really due to the reseller), with 32 GB RAM usable; free -m reports 32194 MB total memory, and the kernel reports 34586624k RAM on initialization, both of which are well in line with my expectations.
First, if your BIOS/UEFI does not correctly detect your RAM, then your OS won't do any better. There's no need to go any further if your BIOS displays incorrect information about your setup. => You probably have at least a hardware problem.

EDIT: From your dmesg | grep memory, it seems that you do in fact have a hardware problem, located in your embedded BIOS. At least, Linux has detected it and warns you about it:

    WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 13295MB of RAM.

It also seems that one of your four RAM modules is incorrectly recognized or inserted. You can report it to your manufacturer, upgrade your BIOS, or change your motherboard. There's a good chance that with less RAM, you won't encounter this bug. As a side note, you may agree with this famous quote from Linus Torvalds about BIOS makers:

    BIOS writers are invariably totally incompetent crack-addicted monkeys

Second, when your BIOS is OK with what you really have on your motherboard, you can take a look at /proc/meminfo on Linux. It's often very clear about what your Linux system knows and does with your memory. Here is what I have on my 64-bit system with 8 GB of RAM:

    $ cat /proc/meminfo
    MemTotal:        8175652 kB
    MemFree:         5476336 kB
    Buffers:           63924 kB
    Cached:          1943460 kB
    SwapCached:            0 kB
    [...]

About the boot process and what is used/freed by the Linux kernel, you can grep it from dmesg:

    $ dmesg | grep Memory
    [    0.000000] Memory: 8157672k/8904704k available (6138k kernel code, 534168k absent, 212864k reserved, 6896k data, 988k init)

EDIT: As Gilles said, with dmidecode --type memory you can get details about your hardware configuration. It looks like this for a 4x2GB system:

    $ sudo dmidecode --type memory
    # dmidecode 2.9
    SMBIOS 2.6 present.

    Handle 0x0020, DMI type 16, 15 bytes
    Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 32 GB
        Error Information Handle: Not Provided
        Number Of Devices: 4

    Handle 0x0022, DMI type 17, 28 bytes
    Memory Device
        Array Handle: 0x0020
        Error Information Handle: Not Provided
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 2048 MB
    [...] [This block is repeated for each module]
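A quick way to cross-check dmidecode against what the kernel sees is to sum the per-DIMM Size fields. A rough sketch (the regex, function name, and sample text are mine; the sample mirrors the asker's three populated slots plus one empty one):

```python
import re

def installed_mb(dmidecode_output):
    """Sum 'Size: NNNN MB' lines from `dmidecode --type memory` output."""
    return sum(int(s) for s in re.findall(r"Size:\s+(\d+) MB", dmidecode_output))

sample = """
        Size: 8192 MB
        Size: 8192 MB
        Size: 8192 MB
        Size: No Module Installed
"""
print(installed_mb(sample))  # 24576 MB, i.e. the 24 GB seen with three DIMMs
```

Empty slots report "No Module Installed" rather than a number, so they simply don't match.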
64-bit Linux doesn't recognize my RAM between 3 and 32 GB
1,377,261,966,000
I have a system with 8 x 16 GB DIMMs, so 128 GB total. However, the MemTotal reported by /proc/meminfo is 131927808 kB, so about 131 GB. My research suggests that if anything, meminfo should add up to less than the RAM total (see "Understanding /proc/meminfo file (Analyzing Memory utilization in Linux)").

Google's calculator reports this sum as 131 (just divided by 1000000): https://www.google.com/search?q=131927808+kB+to+GB. If you interpret the kB to mean kibibytes, it is instead 135 GB (worse!). If you convert kibibytes to gibibytes, it's 125; kilobytes to gigabytes, 122. Below are the details. Can anyone help me understand this discrepancy?

    # cat /proc/meminfo
    MemTotal:       131927808 kB
    MemFree:          3186732 kB
    MemAvailable:    99191856 kB
    Buffers:          3476036 kB
    Cached:         115792344 kB
    SwapCached:        120540 kB
    Active:          80544652 kB
    Inactive:        45017236 kB
    Active(anon):    28044884 kB
    Inactive(anon):   3127872 kB
    Active(file):    52499768 kB
    Inactive(file):  41889364 kB
    Unevictable:        13040 kB
    Mlocked:        584115752720 kB
    SwapTotal:        1953788 kB
    SwapFree:               0 kB
Memory capacity in DIMMs is measured in powers of two, so a claimed RAM capacity of “128 giga-something” is 128 GiB which is 134,217,728 kiB. /proc/meminfo also measures memory in powers of two, so the MemTotal value of 131,927,808 can be compared with 134,217,728 and is safely less. MemTotal is the total installed physical memory minus whatever is reserved by the system firmware and the kernel binary. Your boot log should contain a line of the form ... [ 0.000000] Memory: 32784756K/33435864K available (10252K kernel code, 1243K rwdata, 3324K rodata, 1584K init, 2280K bss, 651108K reserved, 0K cma-reserved) which will indicate exactly how much is reserved by the system (the “reserved” figure) and the kernel binary (the “kernel code” figure).
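The comparison in the answer, spelled out as plain arithmetic (nothing system-specific; the MemTotal value is the one from the question):

```python
installed_kib = 128 * 1024 ** 2     # 128 GiB of installed DIMMs, expressed in kiB
memtotal_kib = 131927808            # MemTotal from the question
reserved_kib = installed_kib - memtotal_kib
print(installed_kib, reserved_kib)  # 134217728 kiB installed; ~2.2 GiB held back
```

The difference is the memory reserved by firmware and the kernel binary, as described above.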
Discrepancy between physical RAM and /proc/meminfo
1,377,261,966,000
The following three outputs were taken essentially simultaneously.

top:

    top - 02:54:36 up 2 days, 13:50, 3 users, load average: 0.05, 0.05, 0.09
    Tasks: 181 total, 1 running, 179 sleeping, 0 stopped, 1 zombie
    %Cpu(s): 2.5 us, 0.8 sy, 0.0 ni, 96.6 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
    KiB Mem:  16158632 total, 11234480 used,  4924152 free,      844 buffers
    KiB Swap: 16777212 total,        0 used, 16777212 free, 10640832 cached

free -h:

                 total       used       free     shared    buffers     cached
    Mem:           15G        10G       4.7G         0B       844K        10G
    -/+ buffers/cache:       578M        14G
    Swap:          15G         0B        15G

htop: (screenshot not included)

free and top seem to agree. In top there is 11234480 KiB used; subtracting 10640832 KiB cached gives 579.7 MiB, which is pretty close to what free reports under used -/+ buffers/cache. However, htop is reporting 1836 (MiB) used, which is neither here nor there as far as I can see. Where does this difference come from? htop is clearly not including the cached data, but it is still reporting more than three times the memory usage of free or top. I am aware that there are many similar questions, but I haven't come across one that explains this discrepancy (the confusion usually seems to be only the with/without cache counting).

Edit: I should mention that I am running openSUSE, and I see the same kind of discrepancy in both version 12.2 and 12.3 RC1.

Edit 2: The included version of htop is 1.0.1. I have also compiled version 1.0.2 from source and see the same discrepancy there as well.
A complete re-write of my previous post. Got a bit curious and checked it out further. In short: the reason for the difference is that openSUSE uses a patched version of top and free that adds some extra values to 'cached'.

A) Standard version

top, free, htop, ...: usage is calculated by reading data from /proc/meminfo. E.g.:

    #free:
    Row    Column  | Corresponding /proc/meminfo entry
    -------|-------|----------------------------------
    Mem:   total   : MemTotal
           used    : MemTotal - MemFree
           free    : MemFree
           shared  : MemShared
           buffers : Buffers
           cached  : Cached
    -------|-------|----------------------------------
    -/+ buffers/cache:
           used    : (MemTotal - MemFree) - (Buffers + Cached)
           free    : MemFree + (Buffers + Cached)

    #htop:
    Used U* : ((MemTotal - MemFree) - (Buffers + Cached)) / 1024

*I'm using the name Used U for memory used by User Mode, aka Used minus (Cached + Buffers). So in reality the same calculation is used. htop displays the following in the memory meter:

    [Used U % of total | Buffers % of total | Cached % of total] UsedU MB

(MB is actually MiB.)

B) Patched version

The base for free and top on Debian, Fedora and openSUSE is procps-ng. However, each flavour adds its own patches that might, or might not, become part of the main project. Under openSUSE we find various additions to the top/free (procps) package. The ones to take notice of here are some additional values used to represent the cache value. (I did not include these in my previous post as my system uses a "clean" procps.)

B.1) Additions

In /proc/meminfo we have Slab, which is the in-kernel data structure cache. As a sub-category we find SReclaimable, the part of Slab that might be reclaimed for other use by both Kernel and User Mode. Further we have SwapCached, which is memory that was once swapped out and has been swapped back in but is also still present in the swap file; thus if it needs to be swapped out again, that work is already done. Lastly there is NFS_Unstable, which is pages sent to the server but not yet committed to stable storage.

The following values are added to cache in the openSUSE-patched version:

    SReclaimable
    SwapCached
    NFS_Unstable

(In addition there are some checks: total has to be greater than free, used has to be greater than buffers + cache, etc.)

B.2) Result

Looking at free, as a result the following values are the same: total, used, free and buffers. The following are changed: cached and "-/+ buffers".

    used = MemTotal - MemFree

    old:
    cached            : Cached
    -/+ buffers, used : used - (Buffers + Cached)
    -/+ buffers, free : free + (Buffers + Cached)

    patched:
    cached            : Cached + SReclaimable + SwapCached + NFS_Unstable
    -/+ buffers, used : used - (Buffers + Cached + SReclaimable + SwapCached + NFS_Unstable)
    -/+ buffers, free : free + (Buffers + Cached + SReclaimable + SwapCached + NFS_Unstable)

The same additions are made to top. htop is unchanged and thus only aligns with older/unpatched versions of top/free.
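Plugged into concrete numbers, the two formulas diverge like this. The MemTotal/MemFree/Buffers/Cached figures below match the question's top output; the SReclaimable value is made up for illustration, since the question doesn't show it:

```python
def cached_patched(Cached, SReclaimable, SwapCached, NFS_Unstable):
    # openSUSE-patched 'cached': extra reclaimable categories are counted too
    return Cached + SReclaimable + SwapCached + NFS_Unstable

def used_minus_buffers_cache(MemTotal, MemFree, Buffers, cached):
    # The "-/+ buffers/cache: used" row in free
    return (MemTotal - MemFree) - (Buffers + cached)

# Values in kB; SReclaimable is hypothetical
MemTotal, MemFree, Buffers = 16158632, 4924152, 844
Cached, SReclaimable, SwapCached, NFS_Unstable = 10640832, 580000, 0, 0

plain = used_minus_buffers_cache(MemTotal, MemFree, Buffers, Cached)
patched = used_minus_buffers_cache(
    MemTotal, MemFree, Buffers,
    cached_patched(Cached, SReclaimable, SwapCached, NFS_Unstable))
print(plain, patched)  # patched 'used' is smaller by exactly the extra cache terms
```

This is the shape of the htop-vs-free gap: htop uses the plain formula, openSUSE's free the patched one.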
htop reporting much higher memory usage than free or top
1,377,261,966,000
I am using Arch Linux (5.1.8-arch1-1-ARCH) with the XFCE DE and XFWM4 WM. Things are pretty elegant and low on RAM and CPU usage. After boot, when the DE is completely loaded, I see 665 MiB of RAM usage. After opening applications like Atom, Code, Firefox, Chromium, or after working in GIMP, Blender, etc., the RAM usage increases, which is expected. But after closing all the applications and being left with nothing but gnome-system-monitor, I can see that the RAM usage is 1.2 - 1.4 GiB. /proc/meminfo agrees with gnome-system-monitor, but htop gives different results all the time. The worst thing is that when I open a RAM-hogging application later on, it again consumes the memory it needs on top of that 1.4 GiB. This is always the case. No files that could add up to megabytes are stored in the /tmp/ directory. Also, if I look for the process that's using that much RAM (from 700 MiB at start to 1.4 GiB after closing the browser!!), I see nothing. In fact, I faced the same issue even on my Raspberry Pi running Arch ARM.
The Ruby code:

    #!/usr/bin/ruby -w
    STDOUT.sync = true
    loop do
      IO.readlines(File.join(%w(/ proc meminfo)))
        .then { |x| [x[0], x[2]] }
        .map { |x| x.split[1].to_i }
        .reduce(:-)
        .tap { |x| print "\e[2K\rRAM Usage:".ljust(20), "#{x / 1024.0} MiB".ljust(24), "#{(x / 1000.0)} MB" }
      Kernel.sleep(0.1)
    end

The cat /proc/meminfo command has the following output:

    MemTotal:        3851796 kB
    MemFree:         1135680 kB
    MemAvailable:    2055708 kB
    Buffers:            1048 kB
    Cached:          1463960 kB
    SwapCached:          284 kB
    Active:          1622148 kB
    Inactive:         660952 kB
    Active(anon):     923580 kB
    Inactive(anon):   269360 kB
    Active(file):     698568 kB
    Inactive(file):   391592 kB
    Unevictable:      107012 kB
    Mlocked:              32 kB
    SwapTotal:       3978216 kB
    SwapFree:        3966696 kB
    Dirty:               280 kB
    Writeback:             0 kB
    AnonPages:        924844 kB
    Mapped:           563732 kB
    Shmem:            374848 kB
    KReclaimable:      74972 kB
    Slab:             130016 kB
    SReclaimable:      74972 kB
    SUnreclaim:        55044 kB
    KernelStack:        8000 kB
    PageTables:        14700 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:     5904112 kB
    Committed_AS:    3320548 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:           0 kB
    VmallocChunk:          0 kB
    Percpu:             1456 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:         0 kB
    ShmemHugePages:        0 kB
    ShmemPmdMapped:        0 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:               0 kB
    DirectMap4k:      226736 kB
    DirectMap2M:     3778560 kB
    DirectMap1G:           0 kB

Firstly, as you noticed, htop never agrees; I don't know much about that. And secondly, you can see that xfdesktop uses 44 MiB, some other processes use some of the memory, and the kernel uses ~150 MiB. Apart from that, why am I seeing 1.5 GiB of RAM being used? Does this really affect the performance of the system?
Unused RAM is wasted RAM. The Linux kernel has advanced memory management features and tries to avoid putting a burden on the bottleneck in your system, your hard drive/SSD: it tries to cache files in memory. The memory management system works in complex ways; better performance is the goal. You can see what it is doing by inspecting /proc/meminfo:

    cat /proc/meminfo

You can reclaim this cached memory using "drop_caches". However, note the documentation says "use outside of a testing or debugging environment is not recommended", simply because "it may cost a significant amount of I/O and CPU to recreate the dropped objects" when they are needed again :-).

Clear PageCache only:

    # sync; echo 1 > /proc/sys/vm/drop_caches

Clear dentries and inodes:

    # sync; echo 2 > /proc/sys/vm/drop_caches

Clear PageCache, dentries and inodes:

    # sync; echo 3 > /proc/sys/vm/drop_caches

Note that sync will flush the file system buffer to ensure all data has been written.

From the kernel docs:

Page cache

The physical memory is volatile and the common case for getting data into the memory is to read it from files. Whenever a file is read, the data is put into the page cache to avoid expensive disk access on the subsequent reads. Similarly, when one writes to a file, the data is placed in the page cache and eventually gets into the backing storage device. The written pages are marked as dirty and when Linux decides to reuse them for other purposes, it makes sure to synchronize the file contents on the device with the updated data.

Reclaim

Throughout the system lifetime, a physical page can be used for storing different types of data. It can be kernel internal data structures, DMA’able buffers for device drivers use, data read from a filesystem, memory allocated by user space processes etc. Depending on the page usage it is treated differently by the Linux memory management.
The pages that can be freed at any time, either because they cache the data available elsewhere, for instance, on a hard disk, or because they can be swapped out, again, to the hard disk, are called reclaimable. The most notable categories of the reclaimable pages are page cache and anonymous memory. In most cases, the pages holding internal kernel data and used as DMA buffers cannot be repurposed, and they remain pinned until freed by their user. Such pages are called unreclaimable. However, in certain circumstances, even pages occupied with kernel data structures can be reclaimed. For instance, in-memory caches of filesystem metadata can be re-read from the storage device and therefore it is possible to discard them from the main memory when system is under memory pressure. The process of freeing the reclaimable physical memory pages and repurposing them is called (surprise!) reclaim. Linux can reclaim pages either asynchronously or synchronously, depending on the state of the system. When the system is not loaded, most of the memory is free and allocation requests will be satisfied immediately from the free pages supply. As the load increases, the amount of the free pages goes down and when it reaches a certain threshold (high watermark), an allocation request will awaken the kswapd daemon. It will asynchronously scan memory pages and either just free them if the data they contain is available elsewhere, or evict to the backing storage device (remember those dirty pages?). As memory usage increases even more and reaches another threshold - min watermark - an allocation will trigger direct reclaim. In this case allocation is stalled until enough memory pages are reclaimed to satisfy the request. Memory Leaks Now, some programs can have "memory leaks", that is, they "forget" to free up memory they no longer use. You can see this if you leave a program running for some time, its memory usage constantly increases, when you close it, the memory is never freed. 
Now, programmers try to avoid memory leaks, of course, but programs can still have them. The way to reclaim leaked memory is to restart the program in question, or to reboot.
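Before dropping caches, it can be worth checking how much memory they actually hold. A read-only sketch (no root needed) that sums the /proc/meminfo fields drop_caches affects:

```shell
#!/bin/sh
# Sum the page cache, buffers and reclaimable slab (dentries/inodes)
# reported in /proc/meminfo -- roughly what "echo 3 > drop_caches"
# could release. Reading the file requires no privileges.
awk '/^(Buffers|Cached|SReclaimable):/ { printf "%-14s %10d kB\n", $1, $2; sum += $2 }
     END                               { printf "%-14s %10d kB\n", "reclaimable", sum }' /proc/meminfo
```

If the sum is small, dropping caches will not buy you much anyway.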
Why does my system use more RAM after an hour of usage?
1,377,261,966,000
I am currently having some issues running Java. It won't start because of heap issues. But I have more than 9 GB Ram free (or even 16 GB if you assumed the cache would be empty). This is the error I get (and the free command) root@server: ~ # java Error occurred during initialization of VM Could not reserve enough space for object heap Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. root@server: ~ # free total used free shared buffers cached Mem: 25165824 15941148 9224676 0 0 7082176 -/+ buffers/cache: 8858972 16306852 Swap: 0 0 0 I am running a 64 bit Debian on a virtualized server. The virtualization software is OpenVZ. This is my Java version (I can execute this command after I stop two of my VM's (4 currently running)): root@server: ~ # java -version java version "1.7.0_45" Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) What could I do? As requested: root@server: ~ # cat /proc/meminfo MemTotal: 25165824 kB MemFree: 11723412 kB Cached: 4597552 kB Active: 9692308 kB Inactive: 3322544 kB Active(anon): 7411960 kB Inactive(anon): 1005340 kB Active(file): 2280348 kB Inactive(file): 2317204 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 960 kB Writeback: 0 kB AnonPages: 8417300 kB Shmem: 21504 kB Slab: 427452 kB SReclaimable: 383424 kB SUnreclaim: 44028 kB Reguest2: root@server: ~ # cat /proc/user_beancounters Version: 2.5 uid resource held maxheld barrier limit failcnt 10023468: kmemsize 399250512 506245120 5053325720 5558658292 0 lockedpages 0 8 246744 246744 0 privvmpages 6005602 6291447 6291456 6291456 221 shmpages 8576 8608 579124 579124289562 0 dummy 0 0 9223372036854775807 9223372036854775807 0 numproc 598 1236 30000 30000 0 physpages 4634494 6291456 6291456 6291456 0 vmguarpages 0 0 6291456 9223372036854775807 0 oomguarpages 1529371 2144671 6291456 9223372036854775807 0 numtcpsock 62 164 30000 30000 0 
numflock 25 39 1000 1100 0 numpty 13 24 512 512 0 numsiginfo 10 75 1024 1024 0 tcpsndbuf 3330352 4153232 1179110194 1684441906 0 tcprcvbuf 1216896 34410032 1179110194 1684441906 0 othersockbuf 270504 537552 589555096 1094886808 0 dgramrcvbuf 0 67048 589555096 589555096 0 numothersock 287 333 30000 30000 0 dcachesize 355559855 446054103 1103879952 1136996352 0 numfile 4766 7745 250000 250000 0 dummy 0 0 9223372036854775807 9223372036854775807 0 dummy 0 0 9223372036854775807 9223372036854775807 0 dummy 0 0 9223372036854775807 9223372036854775807 0 numiptent 14 14 1000 1000 0 root@server: ~ # java Error occurred during initialization of VM Could not reserve enough space for object heap Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. root@server: ~ # cat /proc/user_beancounters Version: 2.5 uid resource held maxheld barrier limit failcnt 10023468: kmemsize 399246622 506245120 5053325720 5558658292 0 lockedpages 0 8 246744 246744 0 privvmpages 6005601 6291447 6291456 6291456 233 shmpages 8576 8608 579124 579124289562 0 dummy 0 0 9223372036854775807 9223372036854775807 0 numproc 598 1236 30000 30000 0 physpages 4635460 6291456 6291456 6291456 0 vmguarpages 0 0 6291456 9223372036854775807 0 oomguarpages 1529376 2144671 6291456 9223372036854775807 0 numtcpsock 64 164 30000 30000 0 numflock 25 39 1000 1100 0 numpty 13 24 512 512 0 numsiginfo 10 75 1024 1024 0 tcpsndbuf 3365232 4153232 1179110194 1684441906 0 tcprcvbuf 1249664 34410032 1179110194 1684441906 0 othersockbuf 270504 537552 589555096 1094886808 0 dgramrcvbuf 0 67048 589555096 589555096 0 numothersock 287 333 30000 30000 0 dcachesize 355559855 446054103 1103879952 1136996352 0 numfile 4768 7745 250000 250000 0 dummy 0 0 9223372036854775807 9223372036854775807 0 dummy 0 0 9223372036854775807 9223372036854775807 0 dummy 0 0 9223372036854775807 9223372036854775807 0 numiptent 14 14 1000 1000 0
OpenVZ & Memory

The failcnt is going up on privvmpages, so your container is unable to allocate any more virtual memory space from the host:

root@server: ~ # cat /proc/user_beancounters
Version: 2.5
uid resource held maxheld barrier limit failcnt
privvmpages 6005601 6291447 6291456 6291456 >233<
physpages 4635460 6291456 6291456 6291456 0
vmguarpages 0 0 6291456 9223372036854775807 0
oomguarpages 1529376 2144671 6291456 9223372036854775807 0

Note that virtual memory != physical memory. Processes can allocate up to somewhere around the addressable amount of virtual memory (32-bit: ~2 GB - 4 GB; 64-bit: 8 TB - 256 TB), but that doesn't mean physical memory pages are being used (a page being a 4 KB chunk of memory).

physpages is the number of physical memory pages your container can use.
oomguarpages is the number of memory pages the container is guaranteed to receive when the host is memory-constrained.
privvmpages is the number of virtual memory pages your container can use.
vmguarpages is the guaranteed amount of virtual memory, in the same way.

Java

Oracle Java will always allocate one contiguous chunk of virtual memory. Running java with no arguments on a box results in 5 MB of real memory used (RSS), but 660 MB of VM space allocated (VSZ):

PID COMMAND VSZ RSS
20816 java 667496 4912

Looking at the memory segments for the java process in its smaps file shows a chunk of about 500 MB allocated; the rest is memory-mapped files and normal Java stuff. On a system that's been up for a while, the available VM space becomes fragmented as processes use/free parts of it. A grep Vmalloc /proc/meminfo will give you VmallocChunk, which is the largest free chunk currently available. If this is low, the system will try to allocate more when java requests it; after all, it's virtually unlimited on a 64-bit box.

Fix

Tell your host to configure privvmpages and vmguarpages much higher.
There's no need for them to be the same as physical memory, since virtual memory allocations don't necessarily consume physical pages. You might be able to work around the problem temporarily by dropping your file cache:

echo 1 > /proc/sys/vm/drop_caches

but that's only temporary. You can limit the chunk of memory Java tries to allocate by setting the initial heap size with -Xms and capping it with -Xmx. Running java with these options on my machine:

java -Xms10M -Xmx10M

reduces the total virtual size to 140 MB or so, with only a 10 MB contiguous chunk for the Java heap allocated.
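The virtual-versus-physical distinction above is easy to observe for any process, not just the JVM; for example, comparing the VSZ and RSS of the current shell with standard ps output options:

```shell
#!/bin/sh
# VSZ is the virtual address space reserved by the process; RSS is
# the physical pages actually resident. VSZ is normally much larger,
# just as with the 660M-vs-5M java example above.
ps -o pid=,comm=,vsz=,rss= -p $$
```

ps is assumed to be installed (procps); the numbers are in KiB.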
Java "Could not reserve enough space for object heap" even though there is enough RAM
1,377,261,966,000
Memtester produced the following output:

memtester version 4.3.0 (64-bit)
Copyright (C) 2001-2012 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).

pagesize is 4096
pagesizemask is 0xfffffffffffff000
want 10240MB (10737418240 bytes)
got 10240MB (10737418240 bytes), trying mlock ...locked.
Loop 1/1:
Stuck Address : testing 1FAILURE: possible bad address line at offset 0x12325b7a8.
Skipping to next test...
Random Value : ok
FAILURE: 0xa003776ad640ac0c != 0xe003776ad640ac0c at offset 0x7a4f2680.
Compare XOR : FAILURE: 0xe7139f89d94112c0 != 0x27139f89d94112c0 at offset 0x7a4f2680.
FAILURE: 0x4e53ee3a9704bdf5 != 0x4a53ee3a9704bdf5 at offset 0x950b4930.
Compare SUB : FAILURE: 0x96ecab120464e9c0 != 0xd6ecab120464e9c0 at offset 0x7a4f2680.
FAILURE: 0x7f67022cef637b99 != 0x2b67022cef637b99 at offset 0x950b4930.
FAILURE: 0x96c38c9f6e6dd229 != 0xd6c38c9f6e6dd229 at offset 0xe40d2b50.
Compare MUL : FAILURE: 0x00000001 != 0x00000002 at offset 0x69394a08.
FAILURE: 0x00000001 != 0x00000000 at offset 0x950b4930.
FAILURE: 0x400000000000001 != 0x00000001 at offset 0xea6b07a8.
FAILURE: 0x400000000000000 != 0x00000000 at offset 0xfb853610.
FAILURE: 0x00000000 != 0x800000000000000 at offset 0x12bf3ed10.
Compare DIV : FAILURE: 0x777fd9f1ddc6c1cd != 0x777fd9f1ddc6c1cf at offset 0x69394a08.
FAILURE: 0x777fd9f1ddc6c1cd != 0x7f7fd9f1ddc6c1cd at offset 0x12bf3ed10.
Compare OR : FAILURE: 0x367600d19dc6c040 != 0x367600d19dc6c042 at offset 0x69394a08.
FAILURE: 0x367600d19dc6c040 != 0x767600d19dc6c040 at offset 0x7a4f2680.
FAILURE: 0x367600d19dc6c040 != 0x3e7600d19dc6c040 at offset 0x12bf3ed10.
Compare AND : Sequential Increment: ok
Solid Bits : testing 0FAILURE: 0x4000000000000000 != 0x00000000 at offset 0x12325b7a8.
Block Sequential : testing 0FAILURE: 0x400000000000000 != 0x00000000 at offset 0xfb853610.
Checkerboard : testing 1FAILURE: 0xaaaaaaaaaaaaaaaa != 0xeaaaaaaaaaaaaaaa at offset 0x7a4f2680.
Bit Spread : testing 1FAILURE: 0xdffffffffffffff5 != 0xfffffffffffffff5 at offset 0x102e353e8.
Bit Flip : testing 0FAILURE: 0x4000000000000001 != 0x00000001 at offset 0x12325b7a8.
Walking Ones : testing 40FAILURE: 0xdffffeffffffffff != 0xfffffeffffffffff at offset 0x102e353e8.
Walking Zeroes : testing 0FAILURE: 0x400000000000001 != 0x00000001 at offset 0xea6b07a8.
FAILURE: 0x400000000000001 != 0x00000001 at offset 0xfb853610.
8-bit Writes : -FAILURE: 0xfeefa0a577dfa825 != 0xdeefa0a577dfa825 at offset 0x4bd600e8.
16-bit Writes : -FAILURE: 0xf3dfa5fff79e950b != 0xf7dfa5fff79e950b at offset 0x2b04cca8.
FAILURE: 0x3ffb3fc56e7532c1 != 0x7ffb3fc56e7532c1 at offset 0xe40d2b50.
Done.

Clearly this shows bad memory. Is it possible to mark this memory as bad in the kernel or hypervisor and keep using it? Or is it time to put it in File 13 and buy a replacement?
Unless you can detect errors reasonably quickly, e.g. with ECC memory or by rebooting regularly into a memory test, it's better to replace the module: you risk silent data corruption. You can tell the kernel to ignore memory by reserving it, with the memmap option (see the kernel documentation for details):

memmap=nn[KMG]$ss[KMG]
[KNL,ACPI] Mark specific memory as reserved.
Region of memory to be reserved is from ss to ss+nn.
Example: Exclude memory from 0x18690000-0x1869ffff
memmap=64K$0x18690000
or
memmap=0x10000$0x18690000
Some bootloaders may need an escape character before '$',
like Grub2, otherwise '$' and the following number will be eaten.

The difficult part here is figuring out which address ranges to reserve; memtester gives you addresses from its virtual address space, which don't match the physical addresses needed for memmap. The simplest approach is to boot with the memtest kernel parameter; you'll see something like this:

4c494e5558726c7a bad mem addr 0x000000012f9eaa78 - 0x000000012f9eaa80 reserved
4c494e5558726c7a bad mem addr 0x00000001b86fe928 - 0x00000001b86fe930 reserved
0x000000012f9eaa80 - 0x00000001b86fe928 pattern 4c494e5558726c7a

The kernel will then deactivate the ranges that it detects to be bad. You can continue booting with memtest enabled, or use the reserved address ranges to construct memmap arguments instead.
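Once you do have a physical address, constructing the memmap argument is just page-alignment arithmetic. A sketch, using the first bad address from the memtest output above (substitute your own):

```shell
#!/bin/sh
# Build a memmap= argument that reserves the 4 KiB page containing
# a given bad physical address. The address below is taken from the
# memtest output above; substitute your own.
bad=0x000000012f9eaa78
pagesize=4096
start=$(( bad & ~(pagesize - 1) ))   # round down to a page boundary
printf 'memmap=4K$0x%x\n' "$start"
# prints memmap=4K$0x12f9ea000
```

Remember the note above about escaping the '$' for your bootloader.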
What can I do with the output of memtester when it shows bad memory?
1,377,261,966,000
If I have a tmpfs set to 50%, and later on I add or remove RAM, does tmpfs automatically adjust its partition size? Also what if I have multiple tmpfs each set at 50%. Do multiple tmpfs compete against each other for the same 50%? How is this managed by the OS?
If you mount a tmpfs instance with a percentage, it will take that percentage of the system's physical RAM. For instance, if you have 2 GB of physical RAM and you mount a tmpfs with 50%, your tmpfs will have a size of 1 GB. In your scenario, if you add physical RAM to your system, say another 2 GB so that your system has 4 GB of physical RAM, the tmpfs will have a size of 2 GB when it is next mounted. Mounting multiple instances of tmpfs, each with 50% set, will work. If both tmpfs instances were filled completely, the system would swap out the lesser-used pages. If swap space is full too, you will get No space left on device errors.

Edit: tmpfs only uses the amount of memory that is actually taken, not the full 50%. So if only 10 MB of that 1 GB is in use, your tmpfs instance only occupies those 10 MB. The memory is not reserved; it is allocated dynamically. With multiple instances of 50%, whichever one needs memory first gets memory. The system swaps out the lesser-used pages whether the 50% is occupied or not. A tmpfs instance is not aware of whether it uses physical RAM or swap space. You can mount a tmpfs of 100 GB if you want and it will work.

I assume that you shut the system down before adding RAM, so the tmpfs is remounted at startup anyway. If you add RAM while the system runs, you will fry the RAM, the motherboard and most likely your hand. I can't really recommend that :-)

Sources: Kernel Documentation
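The size that a percentage works out to is simply derived from MemTotal at mount time; a sketch of the same arithmetic:

```shell
#!/bin/sh
# What size=50% means for a tmpfs mount on this machine:
# half of MemTotal as reported by /proc/meminfo.
total_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
half_kb=$(( total_kb / 2 ))
echo "a tmpfs mounted with size=50% would be capped at ${half_kb} kB"
```

After a RAM upgrade, MemTotal changes, so the same mount option yields a larger cap on the next mount.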
Does tmpfs automatically resize when the amount RAM changes, and does it compete when there's multiple tmpfs?
1,377,261,966,000
When I see that phrase (or similar), as e.g. today in How to Use the free Command on Linux (article with 2020 date):

RAM that isn’t being used for something is wasted RAM

I recall about LPDDR used for mobile devices:

Additional savings come from temperature-compensated refresh (DRAM requires refresh less often at low temperatures), partial array self refresh, and a "deep power down" mode which sacrifices all memory contents.

As Android is based on the Linux kernel, does it already support putting part of memory in "deep power down"? Are there kernel parameters to enable managing data in a way that minimizes total memory usage? In total: has the Linux kernel abandoned universally applying the "RAM that isn’t being used for something is wasted RAM" approach?
Has Linux kernel abandoned universally applying "RAM that isn’t being used for something is wasted RAM" approach? No, it hasn’t: it is still the case that the kernel will not try to avoid using memory which is available. However, it supports memory hotplug, which could conceivably be paired with features such as those offered by LPDDR to reduce power consumption: a given memory chip could be relinquished, hot-“unplugged”, and powered down. Whether all that would actually result in reduced power consumption overall is a whole other debate.
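If you want to see what the kernel exposes for memory hotplug on a given machine, sysfs is a reasonable place to look; a sketch (the directory may be absent on kernels built without hotplug support):

```shell
#!/bin/sh
# Count the hot-pluggable memory blocks exposed in sysfs. Each
# /sys/devices/system/memory/memoryN block can, in principle, be
# taken offline independently (echo offline > memoryN/state, as root).
n=$(ls -d /sys/devices/system/memory/memory* 2>/dev/null | wc -l)
echo "$n hot-pluggable memory blocks exposed"
```

A count of 0 simply means this kernel/configuration does not expose hotplug blocks.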
Has Linux kernel abandoned universally applying "RAM that isn’t being used for something is wasted RAM" approach (e.g for mobile devices)?
1,377,261,966,000
I'm using CentOS 7, and I find that my available memory is less than my free memory. Why?

root@localhost:~# free -h
total used free shared buff/cache available
Mem: 251G 1.9G 249G 9.2M 260M 248G
Swap: 64M 49M 14M

There is a similar question, but its answer did not explain why available is less than free; it just talks about the cache: why centos7 free command output available value less than free value
The available memory is just an estimate of how much memory can really be used for starting programs, so it is not a precise value. As you probably already know, the normal behaviour is for available memory to be larger than free memory, but in your case the opposite occurs. The estimate benefits from larger cache/buffer values, and your system has very little cache or buffers, so the estimate is penalized. Taking into account everything else that is subtracted (the page cache the system needs to function well, reclaimable slab that is actually in use, the low watermarks in each zone), your available memory gets a larger negative adjustment and is probably underestimated: the kernel assumes that part of your free memory will be needed for things other than simply loading programs, especially as the system will need more and more memory for process bookkeeping, and a reasonable amount of caches and buffers, as you start programs.

From github:

MemAvailable: An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.

To get a more detailed answer, you will need to post the contents of your /proc/meminfo.
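You can check which way the estimate falls on any system with a quick sketch:

```shell
#!/bin/sh
# Compare MemFree with the kernel's MemAvailable estimate.
# available < free means the estimate is being penalized, as in
# the question above.
awk '/^MemFree:/      { free  = $2 }
     /^MemAvailable:/ { avail = $2 }
     END {
       printf "MemFree: %d kB, MemAvailable: %d kB\n", free, avail
       if (avail >= free) print "available >= free (the usual case)"
       else               print "available < free (estimate penalized)"
     }' /proc/meminfo
```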
Why the available memory is less than the free memory in free command?
1,377,261,966,000
For a while, I encounter RAM-shortages on my Debian webserver (VPS/virtual machine). This would not be unusual, if they happend on a regular basis. But they do not. Here's a chart from Munin:                   To solve such riddles, I tracked my system with atop. Here're two snapshots from 7:00AM and 9:00AM - during and after the RAM shortage (using the -m option to see the memory-related information): ATOP - <snip> 2014/09/10 07:00:02 ------ 10m0s elapsed <snip> MEM | tot 2.0G | free 79.1M | cache 102.4M | dirty 0.1M | buff 53.2M | slab 90.8M | | | SWP | tot 2.0G | free 2.0G | | | | | vmcom 748.1M | vmlim 3.0G | DSK | sda | busy 1% | read 917 | write 1695 | KiB/w 13 | MBr/s 0.01 | MBw/s 0.04 | avio 1.22 ms | <snip> PID MINFLT MAJFLT VSTEXT VSIZE RSIZE VGROW RGROW RUID EUID MEM CMD 1/15 13717 102 18 10709K 874.5M 206.2M 0K 128K mysql mysql 10% mysqld 4086 166 0 450K 228.1M 21896K 0K 0K www-data www-data 1% apache2 19131 1659 99 450K 225.5M 19604K -2652K -2292K www-data www-data 1% apache2 1469 608 0 450K 222.6M 18508K 256K 64K www-data www-data 1% apache2 23038 347 0 450K 222.3M 18496K 0K 0K www-data www-data 1% apache2 4085 721 0 450K 222.1M 18308K 0K 0K www-data www-data 1% apache2 10639 790 0 450K 224.9M 18284K 768K 932K www-data www-data 1% apache2 19158 199 1 450K 222.1M 18064K 0K 52K www-data www-data 1% apache2 1895 330 0 450K 221.8M 18020K 0K 0K www-data www-data 1% apache2 6661 3346 22 450K 224.0M 17700K 1512K -780K www-data www-data 1% apache2 12570 808 0 450K 221.7M 17668K 512K 508K www-data www-data 1% apache2 19817 0 0 450K 214.5M 15336K 0K 0K root root 1% apache2 18209 3996 0 2277K 55592K 14728K 55592K 14728K till till 1% python 18210 2760 0 4K 43292K 10544K 43292K 10544K munin munin 1% munin-update 11976 506 0 149K 18788K 6512K 0K 0K root root 0% atop 1934 175 0 4K 52228K 5852K 0K 0K root root 0% munin-node 17993 0 0 4K 67020K 5712K 0K 0K postgrey postgrey 0% /usr/sbin/post 2000 0 0 346K 244.3M 5668K 0K 0K root root 0% rsyslogd 14557 0 0 7163K 234.9M 
5284K 0K 0K root root 0% php5-fpm 14558 0 0 7163K 234.9M 4564K 0K 0K www-data www-data 0% php5-fpm 14559 0 0 7163K 234.9M 4564K 0K 0K www-data www-data 0% php5-fpm 328 0 0 134K 572.6M 2932K 0K 0K root root 0% console-kit-da <snip> And... ATOP - vmd1989 2014/09/10 09:00:02 ------ 10m0s elapsed <snip> MEM | tot 2.0G | free 1.5G | cache 88.8M | dirty 0.1M | buff 19.2M | slab 25.8M | | | SWP | tot 2.0G | free 2.0G | | | | | vmcom 748.0M | vmlim 3.0G | DSK | sda | busy 0% | read 453 | write 1991 | KiB/w 12 | MBr/s 0.01 | MBw/s 0.04 | avio 1.01 ms | <snip> PID MINFLT MAJFLT VSTEXT VSIZE RSIZE VGROW RGROW RUID EUID MEM CMD 1/16 13717 189 0 10709K 874.5M 206.3M 0K 0K mysql mysql 10% mysqld 23038 743 7 450K 222.6M 18620K 0K 40K www-data www-data 1% apache2 23930 692 0 450K 220.6M 18568K 0K 0K www-data www-data 1% apache2 28738 4784 0 4K 126.4M 18328K 126.4M 18328K munin munin 1% munin-update 26990 392 1 450K 220.5M 18088K 0K 112K www-data www-data 1% apache2 26552 1150 2 450K 220.3M 17788K 512K 576K www-data www-data 1% apache2 28744 1443 0 4K 129.1M 17636K 129.1M 17636K munin munin 1% /usr/share/mun 27424 602 0 450K 219.8M 17504K 8K 240K www-data www-data 1% apache2 27000 216 0 450K 219.8M 17308K 8K 104K www-data www-data 1% apache2 28290 2977 0 450K 219.9M 17200K 219.9M 17200K www-data www-data 1% apache2 19817 68 0 450K 214.5M 15340K 0K 0K root root 1% apache2 28287 429 1 450K 215.0M 10384K 215.0M 10384K www-data www-data 1% apache2 28727 184 0 450K 214.5M 9300K 214.5M 9300K www-data www-data 0% apache2 28728 191 0 450K 214.5M 9300K 214.5M 9300K www-data www-data 0% apache2 11976 490 0 149K 18788K 6512K 0K 0K root root 0% atop 1934 428 0 4K 52228K 5852K 0K 0K root root 0% munin-node 2000 0 0 346K 244.3M 5668K 0K 0K root root 0% rsyslogd 28745 1036 0 4K 52228K 5580K 52228K 5580K root root 0% munin-node [:: 14557 0 0 7163K 234.9M 5284K 0K 0K root root 0% php5-fpm 17993 0 0 4K 67020K 4844K 0K 0K postgrey postgrey 0% /usr/sbin/post 14558 0 0 7163K 234.9M 4564K 0K 0K www-data 
www-data 0% php5-fpm 14559 0 0 7163K 234.9M 4564K 0K 0K www-data www-data 0% php5-fpm 328 0 0 134K 572.6M 2932K 0K 0K root root 0% console-kit-da <snip> Sorry for the long lists - just do not want to miss the cause. Yet, my problem is: I do not see the cause. There is significantly less "free" memory in the status (top), but no process that would explain why, where is the memory going... Is my thinking incorrect with this? Update According to Patrick's advice, I collected /proc/meminfo - during a phase of RAM shortage and later. In sake of easy visibility, I put the content into one table: mem-shortage a bit later MemTotal: 2060776 kB 2060776 kB MemFree: 252896 kB 1608532 kB * Buffers: 15464 kB 12060 kB Cached: 71864 kB 62800 kB SwapCached: 4160 kB 4160 kB Active: 268020 kB 253368 kB Inactive: 134988 kB 132300 kB Active(anon): 225940 kB 220872 kB Inactive(anon): 97296 kB 220872 kB * Active(file): 42080 kB 32496 kB Inactive(file): 37692 kB 29116 kB Unevictable: 6540 kB 6680 kB Mlocked: 6540 kB 6680 kB SwapTotal: 2096476 kB 2096476 kB SwapFree: 2081568 kB 2081568 kB Dirty: 0 kB 116 kB Writeback: 0 kB 0 kB AnonPages: 318084 kB 313364 kB Mapped: 20692 kB 20408 kB Shmem: 4208 kB 9896 kB Slab: 24336 kB 23936 kB SReclaimable: 10252 kB 9316 kB SUnreclaim: 14084 kB 14620 kB KernelStack: 1464 kB 1544 kB PageTables: 8396 kB 9544 kB NFS_Unstable: 0 kB 0 kB Bounce: 0 kB 0 kB WritebackTmp: 0 kB 0 kB CommitLimit: 3126864 kB 3126864 kB Committed_AS: 744764 kB 761812 kB VmallocTotal: 34359738367 kB 34359738367 kB VmallocUsed: 272976 kB 272976 kB VmallocChunk: 34359464431 kB 34359464431 kB HardwareCorrupted: 0 kB 0 kB AnonHugePages: 0 kB 0 kB HugePages_Total: 0 0 HugePages_Free: 0 0 HugePages_Rsvd: 0 0 HugePages_Surp: 0 0 Hugepagesize: 2048 kB 2048 kB DirectMap4k: 282560 kB 282560 kB DirectMap2M: 1814528 kB 1814528 kB I only see two signficant (not in the statistical sense) differences, marked with an asterisk (*), but I do not think, they tell me where the RAM went. 
I also checked for shared memory (as well as I could) ... and found none.

# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status

I also checked for hidden processes using unhide. But except for a false positive (a known issue with Debian), there do not seem to be any hidden processes. Any more ideas why 1.2 GB of RAM is in use - and then not? Could this be another issue caused by the virtual server architecture?

Update

I followed Sergio's hint to consult lsmod and check for memory ballooning. The size column does not reveal anything helpful, but there is a module named vmw_balloon - so it seems to actually be an issue of shifting memory between the virtual machines. Question answered :)

# During high RAM usage (removed middle part)
$ lsmod | sort -r -k 2,2n
Module Size Used by
crc16 12343 1 ext4
crc_t10dif 12348 1 sd_mod
libcrc32c 12426 2 xfs,btrfs
mperf 12453 0
ata_generic 12490 0
pcspkr 12632 0
vmw_balloon 12657 0 <=
ac 12668 0
i2c_piix4 12704 0
coretemp 12898 0
<snip>
reiserfs 193501 0
drm 211856 2 ttm,vmwgfx
ext4 381419 1
xfs 628913 0
btrfs 641551 0
Probably your virtual machine is undergoing some kind of memory-ballooning operation ordered by the virtualization platform. You can try to confirm this by looking for a related module with lsmod (the name changes from one virtualization platform to another, but it should be pretty distinctive). When memory ballooning is enabled, a virtualization host can move memory resources from one VM to another when needed. At the request of the host, the balloon kernel module in the guest reserves the indicated amount of physical RAM (physical from the viewpoint of the OS running on the guest), to be sure that no other process can make use of it. The host then reassigns the real physical resources to another guest. The effect on the guest is exactly what you're seeing: a lot of used memory with no apparent owner. If you don't have control of the virtualization platform, you should ask your provider for information about the actual configuration of the ballooning parameters for your virtual machine.
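A quick way to check for a balloon driver from inside the guest, assuming /proc/modules is readable (the module name varies: vmw_balloon on VMware and virtio_balloon on KVM, for example):

```shell
#!/bin/sh
# Look for a memory-ballooning module among the loaded modules.
if [ -r /proc/modules ] && grep -q balloon /proc/modules; then
    echo "ballooning module loaded:"
    grep balloon /proc/modules
else
    echo "no ballooning module found"
fi
```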
Where has my RAM gone & how to interpret atop's memory output?
1,377,261,966,000
My system is using a big part of my RAM as cache and I want to clean it up because of that; will doing so cause any harm?
There is no need to do this, the kernel manages RAM efficiently by using it for caches and buffers if it is not needed by processes. If processes request more RAM the kernel will deallocate caches and buffers if necessary to satisfy the request. This ServerFault answer explains how to interpret the memory usage reported by free.
How to clean up the RAM memory that is being used as cache memory?
1,377,261,966,000
My current computer is unable to play FullHD movies smoothly, and I had already resigned myself to that, because it seemed to be a graphics card issue, mine not being powerful enough to do the work (which is still very probable). But recently a friend of mine bought an SSD, put it in a laptop with similar specs, and he is now able to play FullHD movies. Now I wonder whether it is a read/write speed problem rather than a GPU problem. So the question, since I'm curious, I don't have an SSD, and I have time to run experiments: is it possible to load the file into RAM and read it from there, hoping the RAM read speed will be similar to that of the SSD?
Yes, it is possible. You can first mount a tmpfs partition and then play your video file from there. I mount my /tmp partition in RAM, since the contents do not need to be preserved between reboots and there are definite speed benefits. Here is the entry in my /etc/fstab which creates it on each boot:

tmpfs /tmp tmpfs defaults,rw,mode=1777,size=3G 0 0

You can do something similar using the mount command as root.
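If you'd rather not touch /etc/fstab or mount anything as root, most distributions already have a tmpfs mounted at /dev/shm, so copying the file there achieves the same effect. A sketch (movie.mkv is a placeholder name):

```shell
#!/bin/sh
# Copy a file into an already-mounted tmpfs (/dev/shm on most
# distros) so subsequent reads come from RAM. movie.mkv is a
# placeholder; substitute your file.
src="movie.mkv"
dst="/dev/shm/$(basename "$src")"
cp "$src" "$dst"
echo "now play: $dst"
# rm "$dst" afterwards to give the RAM back
```

Make sure the file fits comfortably in free RAM first, or you will push the system into swap and lose the benefit.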
Preload movie on RAM
1,558,401,430,000
I reinstalled a fresh Debian 10 on an old x86 system with 512 MB RAM (everything works OK). Available memory is 431 MB. (No graphics card is plugged in right now.) I don't think that much memory was "reserved" on an old 3.x kernel.

$ free -m
total used free shared buff/cache available
Mem: 431 59 311 4 60 355

$ cat /proc/meminfo
MemTotal: 441568 kB

There is much more in the log about memory; I am not sure what is relevant. I am just curious where the lost RAM goes.

EDIT: whole dmesg

[ 0.000000] Linux version 4.19.0-5-686-pae ([email protected]) (gcc version 8.3.0 (Debian 8.3.0-7)) #1 SMP Debian 4.19.37-3 (2019-05-15)
[ 0.000000] x86/fpu: x87 FPU will use FXSAVE
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000dc000-0x00000000000dffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000001fffffff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000ffff0000-0x00000000ffffffff] reserved
[ 0.000000] Notice: NX (Execute Disable) protection missing in CPU!
[ 0.000000] Legacy DMI 2.0 present.
[ 0.000000] DMI: Micro-Star Inc.
INTEL 440LX/INTEL 440LX, BIOS 0627 07/15/95 [ 0.000000] tsc: Fast TSC calibration using PIT [ 0.000000] tsc: Detected 334.067 MHz processor [ 0.003598] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [ 0.003633] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.003684] last_pfn = 0x20000 max_arch_pfn = 0x1000000 [ 0.003731] MTRR default type: uncachable [ 0.003741] MTRR fixed ranges enabled: [ 0.003758] 00000-9FFFF write-back [ 0.003773] A0000-FFFFF uncachable [ 0.003782] MTRR variable ranges enabled: [ 0.003798] 0 base 000000000 mask FE0000000 write-back [ 0.003807] 1 disabled [ 0.003815] 2 disabled [ 0.003823] 3 disabled [ 0.003831] 4 disabled [ 0.003839] 5 disabled [ 0.003847] 6 disabled [ 0.003855] 7 disabled [ 0.007325] x86/PAT: PAT not supported by CPU. [ 0.007771] x86/PAT: Configuration [0-7]: WB WT UC- UC WB WT UC- UC [ 0.050001] found SMP MP-table at [mem 0x000fb250-0x000fb25f] [ 0.112510] initial memory mapped: [mem 0x00000000-0x1affffff] [ 0.112544] Base memory trampoline at [(ptrval)] 9b000 size 16384 [ 0.112573] Kernel/User page tables isolation: disabled on command line. [ 0.113047] BRK [0x1ab82000, 0x1ab83fff] PGTABLE [ 0.113082] BRK [0x1ab84000, 0x1ab84fff] PGTABLE [ 0.113105] BRK [0x1ab85000, 0x1ab85fff] PGTABLE [ 0.113219] RAMDISK: [mem 0x1e40a000-0x1f885fff] [ 0.113277] 0MB HIGHMEM available. [ 0.113291] 512MB LOWMEM available. 
[ 0.113299] mapped low ram: 0 - 20000000 [ 0.113307] low ram: 0 - 20000000 [ 0.113367] BRK [0x1ab86000, 0x1ab86fff] PGTABLE [ 0.113399] Zone ranges: [ 0.113408] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.113425] Normal [mem 0x0000000001000000-0x000000001fffffff] [ 0.113440] HighMem empty [ 0.113451] Movable zone start for each node [ 0.113457] Early memory node ranges [ 0.113469] node 0: [mem 0x0000000000001000-0x000000000009efff] [ 0.113480] node 0: [mem 0x0000000000100000-0x000000001fffffff] [ 0.113496] Initmem setup node 0 [mem 0x0000000000001000-0x000000001fffffff] [ 0.113513] On node 0 totalpages: 130974 [ 0.142335] DMA zone: 40 pages used for memmap [ 0.142352] DMA zone: 0 pages reserved [ 0.142364] DMA zone: 3998 pages, LIFO batch:0 [ 0.143743] Normal zone: 1240 pages used for memmap [ 0.143760] Normal zone: 126976 pages, LIFO batch:31 [ 0.185825] Using APIC driver default [ 0.185978] SFI: Simple Firmware Interface v0.81 http://simplefirmware.org [ 0.194121] Intel MultiProcessor Specification v1.1 [ 0.194130] Virtual Wire compatibility mode. 
[ 0.194215] MPTABLE: OEM ID: MSI [ 0.194223] MPTABLE: Product ID: [ 0.194233] MPTABLE: APIC at: 0xFEE00000 [ 0.194254] Processor #0 (Bootup-CPU) [ 0.194272] Processor #1 [ 0.194367] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 [ 0.194468] Processors: 2 [ 0.194485] smpboot: Allowing 2 CPUs, 0 hotplug CPUs [ 0.194647] PM: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.194669] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [ 0.194679] PM: Registered nosave memory: [mem 0x000a0000-0x000dbfff] [ 0.194688] PM: Registered nosave memory: [mem 0x000dc000-0x000dffff] [ 0.194697] PM: Registered nosave memory: [mem 0x000e0000-0x000effff] [ 0.194706] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.194733] [mem 0x20000000-0xfebfffff] available for PCI devices [ 0.194744] Booting paravirtualized kernel on bare hardware [ 0.194780] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns [ 0.194914] random: get_random_bytes called from start_kernel+0x81/0x45f with crng_init=0 [ 0.194994] setup_percpu: NR_CPUS:32 nr_cpumask_bits:32 nr_cpu_ids:2 nr_node_ids:1 [ 0.197559] percpu: Embedded 29 pages/cpu s89932 r0 d28852 u118784 [ 0.197634] pcpu-alloc: s89932 r0 d28852 u118784 alloc=29*4096 [ 0.197648] pcpu-alloc: [0] 0 [0] 1 [ 0.197835] Built 1 zonelists, mobility grouping on. 
Total pages: 129694 [ 0.197861] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.19.0-5-686-pae root=UUID=375c43d8-1ed9-48c6-a196-9787ccb61863 ro quiet acpi=off nopti nospectre_v2 nospec_store_bypass_disable [ 0.200664] Dentry cache hash table entries: 65536 (order: 6, 262144 bytes) [ 0.201381] Inode-cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.201402] BRK [0x1ab87000, 0x1ab87fff] PGTABLE [ 0.201574] Initializing CPU#0 [ 0.585340] Initializing HighMem for node 0 (00000000:00000000) [ 0.648160] Memory: 419336K/523896K available (6751K kernel code, 660K rwdata, 2068K rodata, 880K init, 452K bss, 104560K reserved, 0K cma-reserved, 0K highmem) [ 0.648224] virtual kernel memory layout: fixmap : 0xffd35000 - 0xfffff000 (2856 kB) cpu_entry : 0xff400000 - 0xff8e1000 (4996 kB) pkmap : 0xff000000 - 0xff200000 (2048 kB) vmalloc : 0xe0800000 - 0xfeffe000 ( 487 MB) lowmem : 0xc0000000 - 0xe0000000 ( 512 MB) .init : 0xda955000 - 0xdaa31000 ( 880 kB) .data : 0xda697dd8 - 0xda945300 (2741 kB) .text : 0xda000000 - 0xda697dd8 (6751 kB) [ 0.648233] Checking if this processor honours the WP bit even in supervisor mode...Ok. [ 0.649959] SLUB: HWalign=32, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 [ 0.649986] ftrace: allocating 29700 entries in 59 pages [ 0.917280] rcu: Hierarchical RCU implementation. [ 0.917307] rcu: RCU restricting CPUs from NR_CPUS=32 to nr_cpu_ids=2. [ 0.917321] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 [ 0.971555] NR_IRQS: 2304, nr_irqs: 440, preallocated irqs: 16 [ 0.972398] CPU 0 irqstacks, hard=(ptrval) soft=(ptrval) [ 0.973212] Console: colour dummy device 80x25 [ 0.973266] console [tty0] enabled [ 0.973397] APIC: Switch to symmetric I/O mode setup [ 0.973423] Enabling APIC mode: Flat. 
Using 1 I/O APICs [ 0.973801] ExtINT not setup in hardware but reported by MP table [ 0.975451] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=0 pin2=0 [ 0.993429] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x4d0bcc23f1, max_idle_ns: 440795205856 ns [ 0.993549] Calibrating delay loop (skipped), value calculated using timer frequency.. 668.13 BogoMIPS (lpj=1336268) [ 0.993579] pid_max: default: 32768 minimum: 301 [ 0.994118] Security Framework initialized [ 0.994137] Yama: disabled by default; enable with sysctl kernel.yama.* [ 0.994428] AppArmor: AppArmor initialized [ 0.994682] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes) [ 0.994726] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes) [ 0.997334] mce: CPU supports 5 MCE banks [ 0.997553] Last level iTLB entries: 4KB 32, 2MB 0, 4MB 2 [ 0.997567] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 8, 1GB 0 [ 0.997586] Speculative Store Bypass: Vulnerable [ 0.997761] MDS: Vulnerable: Clear CPU buffers attempted, no microcode [ 0.998665] Freeing SMP alternatives memory: 24K [ 1.117524] smpboot: CPU0: Intel Pentium II (Deschutes) (family: 0x6, model: 0x5, stepping: 0x1) [ 1.119725] Performance Events: p6 PMU driver. [ 1.119776] ... version: 0 [ 1.119785] ... bit width: 32 [ 1.119791] ... generic registers: 2 [ 1.119800] ... value mask: 00000000ffffffff [ 1.119808] ... max period: 000000007fffffff [ 1.119814] ... fixed-purpose events: 0 [ 1.119821] ... event mask: 0000000000000003 [ 1.120534] rcu: Hierarchical SRCU implementation. [ 1.126044] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. [ 1.126734] smp: Bringing up secondary CPUs ... [ 1.128629] CPU 1 irqstacks, hard=(ptrval) soft=(ptrval) [ 1.128641] x86: Booting SMP configuration: [ 1.128652] .... node #0, CPUs: #1 [ 0.005020] Initializing CPU#1 [ 0.005020] [Firmware Bug]: CPU1: APIC id mismatch. 
Firmware: 1 APIC: 0 [ 1.214072] smp: Brought up 1 node, 2 CPUs [ 1.214072] smpboot: Max logical packages: 2 [ 1.214072] smpboot: Total of 2 processors activated (1336.36 BogoMIPS) [ 1.221630] devtmpfs: initialized [ 1.225770] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns [ 1.225808] futex hash table entries: 512 (order: 3, 32768 bytes) [ 1.226504] pinctrl core: initialized pinctrl subsystem [ 1.228901] NET: Registered protocol family 16 [ 1.230743] audit: initializing netlink subsys (disabled) [ 1.231149] audit: type=2000 audit(1558406150.256:1): state=initialized audit_enabled=0 res=1 [ 1.231162] cpuidle: using governor ladder [ 1.231240] cpuidle: using governor menu [ 1.234422] PCI: Using configuration type 1 for base access [ 1.249805] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 1.250780] ACPI: Interpreter disabled. [ 1.254036] vgaarb: loaded [ 1.254384] EDAC MC: Ver: 3.0.0 [ 1.255121] PCI: Probing PCI hardware [ 1.255121] PCI: root bus 00: using default resources [ 1.255121] PCI: Probing PCI hardware (bus 00) [ 1.255121] PCI host bridge to bus 0000:00 [ 1.255121] pci_bus 0000:00: root bus resource [io 0x0000-0xffff] [ 1.255121] pci_bus 0000:00: root bus resource [mem 0x00000000-0xfffffffff] [ 1.255121] pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff] [ 1.255121] pci 0000:00:00.0: [8086:7180] type 00 class 0x060000 [ 1.255121] pci 0000:00:00.0: reg 0x10: [mem 0xe8000000-0xebffffff pref] [ 1.257972] pci 0000:00:01.0: [8086:7181] type 01 class 0x060400 [ 1.258520] pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 [ 1.259061] pci 0000:00:07.1: [8086:7111] type 00 class 0x010180 [ 1.259161] pci 0000:00:07.1: reg 0x20: [io 0xffa0-0xffaf] [ 1.259210] pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] [ 1.259226] pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] [ 1.259242] pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] [ 1.259256] 
pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] [ 1.259693] pci 0000:00:07.2: [8086:7112] type 00 class 0x0c0300 [ 1.259814] pci 0000:00:07.2: reg 0x20: [io 0xda00-0xda1f] [ 1.260284] pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 [ 1.260308] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * this clock source is slow. Consider trying other clock sources [ 1.260473] pci 0000:00:07.3: quirk: [io 0x6100-0x613f] claimed by PIIX4 ACPI [ 1.260497] pci 0000:00:07.3: quirk: [io 0x5f00-0x5f0f] claimed by PIIX4 SMB [ 1.260979] pci 0000:00:0f.0: [13c1:1001] type 00 class 0x010400 [ 1.261046] pci 0000:00:0f.0: reg 0x10: [io 0xde00-0xde0f] [ 1.261083] pci 0000:00:0f.0: reg 0x14: [mem 0xeffffff0-0xefffffff] [ 1.261118] pci 0000:00:0f.0: reg 0x18: [mem 0xef000000-0xef7fffff] [ 1.261193] pci 0000:00:0f.0: reg 0x30: [mem 0xeffe0000-0xeffeffff pref] [ 1.261282] pci 0000:00:0f.0: supports D1 [ 1.261832] pci 0000:00:12.0: [8086:1026] type 00 class 0x020000 [ 1.261917] pci 0000:00:12.0: reg 0x10: [mem 0xeffc0000-0xeffdffff 64bit] [ 1.261962] pci 0000:00:12.0: reg 0x18: [mem 0xeff80000-0xeffbffff 64bit] [ 1.261996] pci 0000:00:12.0: reg 0x20: [io 0xdc00-0xdc3f] [ 1.262046] pci 0000:00:12.0: reg 0x30: [mem 0xeff40000-0xeff7ffff pref] [ 1.262150] pci 0000:00:12.0: PME# supported from D0 D3hot D3cold [ 1.262701] pci_bus 0000:01: extended config space not accessible [ 1.263002] pci 0000:00:01.0: PCI bridge to [bus 01] [ 1.263028] pci 0000:00:01.0: bridge window [io 0xc000-0xcfff] [ 1.263050] pci 0000:00:01.0: bridge window [mem 0xeed00000-0xeedfffff] [ 1.263071] pci 0000:00:01.0: bridge window [mem 0xe6b00000-0xe6bfffff pref] [ 1.263118] pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 01 [ 1.263753] pci 0000:00:07.0: PIIX/ICH IRQ router [8086:7110] [ 1.263797] PCI: pci_cache_line_size set to 32 bytes [ 1.263899] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] [ 1.266414] clocksource: Switched to clocksource tsc-early [ 1.447210] VFS: Disk 
quotas dquot_6.6.0 [ 1.447437] VFS: Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 1.449141] AppArmor: AppArmor Filesystem Enabled [ 1.449344] pnp: PnP ACPI: disabled [ 1.449366] PnPBIOS: Scanning system for PnP BIOS support... [ 1.449800] PnPBIOS: Found PnP BIOS installation structure at 0x(ptrval) [ 1.449823] PnPBIOS: PnP BIOS version 1.0, entry 0xf0000:0x6dae, dseg 0xf0000 [ 1.450617] pnp 00:00: [mem 0x00000000-0x0009fbff] [ 1.450638] pnp 00:00: [mem 0x0009fc00-0x0009ffff] [ 1.450655] pnp 00:00: [mem 0x000dc000-0x000dffff] [ 1.450670] pnp 00:00: [mem 0x000f0000-0x000fffff] [ 1.450687] pnp 00:00: [mem 0x00100000-0x1fffffff] [ 1.450705] pnp 00:00: [mem 0xfffffffffec00000-0xfffffffffec00fff] [ 1.450723] pnp 00:00: [mem 0xfffffffffee00000-0xfffffffffee00fff] [ 1.450741] pnp 00:00: [mem 0xffffffffffff0000-0xffffffffffffffff] [ 1.451159] system 00:00: [mem 0x00000000-0x0009fbff] could not be reserved [ 1.451184] system 00:00: [mem 0x0009fc00-0x0009ffff] could not be reserved [ 1.451206] system 00:00: [mem 0x000dc000-0x000dffff] could not be reserved [ 1.451227] system 00:00: [mem 0x000f0000-0x000fffff] could not be reserved [ 1.451247] system 00:00: [mem 0x00100000-0x1fffffff] could not be reserved [ 1.451269] system 00:00: [mem 0xfffffffffec00000-0xfffffffffec00fff] could not be reserved [ 1.451291] system 00:00: [mem 0xfffffffffee00000-0xfffffffffee00fff] could not be reserved [ 1.451313] system 00:00: [mem 0xffffffffffff0000-0xffffffffffffffff] could not be reserved [ 1.451377] system 00:00: Plug and Play BIOS device, IDs PNP0c01 (active) [ 1.451513] pnp 00:01: [io 0x0020-0x0021] [ 1.451531] pnp 00:01: [io 0x00a0-0x00a1] [ 1.451551] pnp 00:01: [irq 2] [ 1.451749] pnp 00:01: Plug and Play BIOS device, IDs PNP0000 (active) [ 1.451906] pnp 00:02: [dma 4] [ 1.451924] pnp 00:02: [io 0x0000-0x000f] [ 1.451941] pnp 00:02: [io 0x0080-0x0090] [ 1.451957] pnp 00:02: [io 0x0094-0x009f] [ 1.451974] pnp 00:02: [io 0x00c0-0x00de] [ 1.452200] pnp 00:02: Plug and 
Play BIOS device, IDs PNP0200 (active) [ 1.452437] pnp 00:03: [irq 0] [ 1.452457] pnp 00:03: [io 0x0040-0x0043] [ 1.452662] pnp 00:03: Plug and Play BIOS device, IDs PNP0100 (active) [ 1.452853] pnp 00:04: [irq 8] [ 1.452872] pnp 00:04: [io 0x0070-0x0071] [ 1.453068] pnp 00:04: Plug and Play BIOS device, IDs PNP0b00 (active) [ 1.453277] pnp 00:05: [irq 1] [ 1.453296] pnp 00:05: [io 0x0060] [ 1.453312] pnp 00:05: [io 0x0064] [ 1.453534] pnp 00:05: Plug and Play BIOS device, IDs PNP0303 (active) [ 1.453761] pnp 00:06: [io 0x0061] [ 1.453964] pnp 00:06: Plug and Play BIOS device, IDs PNP0800 (active) [ 1.454248] pnp 00:07: [irq 13] [ 1.454268] pnp 00:07: [io 0x00f0-0x00ff] [ 1.454468] pnp 00:07: Plug and Play BIOS device, IDs PNP0c04 (active) [ 1.454798] pnp 00:08: [io 0x6100-0x613f] [ 1.454816] pnp 00:08: [io 0x5f00-0x5f0f] [ 1.454833] pnp 00:08: [io 0x04d0-0x04d1] [ 1.454849] pnp 00:08: [io 0x0cf8-0x0cff] [ 1.454866] pnp 00:08: [io 0x0294-0x0297] [ 1.455088] pnp 00:08: Plug and Play BIOS device, IDs PNP0a03 (active) [ 1.455104] PnPBIOS: 9 nodes reported by PnP BIOS; 9 recorded by driver [ 1.483560] pci 0000:00:01.0: PCI bridge to [bus 01] [ 1.483590] pci 0000:00:01.0: bridge window [io 0xc000-0xcfff] [ 1.483619] pci 0000:00:01.0: bridge window [mem 0xeed00000-0xeedfffff] [ 1.483642] pci 0000:00:01.0: bridge window [mem 0xe6b00000-0xe6bfffff pref] [ 1.483684] pci_bus 0000:00: resource 4 [io 0x0000-0xffff] [ 1.483701] pci_bus 0000:00: resource 5 [mem 0x00000000-0xfffffffff] [ 1.483719] pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] [ 1.483735] pci_bus 0000:01: resource 1 [mem 0xeed00000-0xeedfffff] [ 1.483752] pci_bus 0000:01: resource 2 [mem 0xe6b00000-0xe6bfffff pref] [ 1.484194] NET: Registered protocol family 2 [ 1.486277] tcp_listen_portaddr_hash hash table entries: 512 (order: 0, 6144 bytes) [ 1.486370] TCP established hash table entries: 4096 (order: 2, 16384 bytes) [ 1.486499] TCP bind hash table entries: 4096 (order: 3, 32768 bytes) [ 1.486722] TCP: Hash 
tables configured (established 4096 bind 4096) [ 1.487123] UDP hash table entries: 256 (order: 1, 8192 bytes) [ 1.487199] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes) [ 1.487734] NET: Registered protocol family 1 [ 1.487803] NET: Registered protocol family 44 [ 1.487854] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [ 1.488032] pci 0000:00:07.2: PCI->APIC IRQ transform: INT D -> IRQ 19 [ 1.488142] PCI: CLS 32 bytes, default 32 [ 1.488902] Unpacking initramfs... [ 6.074902] Freeing initrd memory: 20976K [ 6.074945] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 6.074962] software IO TLB: mapped [mem 0x16000000-0x1a000000] (64MB) [ 6.082796] Initialise system trusted keyrings [ 6.082913] Key type blacklist registered [ 6.083606] workingset: timestamp_bits=14 max_order=17 bucket_order=3 [ 6.105277] zbud: loaded [ 6.106749] pstore: using deflate compression [ 7.084935] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x4d0bcc23f1, max_idle_ns: 440795205856 ns [ 7.085417] clocksource: Switched to clocksource tsc [ 13.550142] Key type asymmetric registered [ 13.550166] Asymmetric key parser 'x509' registered [ 13.550341] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) [ 13.550843] io scheduler noop registered [ 13.550856] io scheduler deadline registered [ 13.551678] io scheduler cfq registered (default) [ 13.551693] io scheduler mq-deadline registered [ 13.553675] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 13.553857] intel_idle: does not run on family 6 model 5 [ 13.554345] isapnp: Scanning for PnP cards... 
[ 13.909492] isapnp: No Plug & Play device found [ 13.910403] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 13.916097] Linux agpgart interface v0.103 [ 13.917233] agpgart-intel 0000:00:00.0: Intel 440LX Chipset [ 13.931948] agpgart-intel 0000:00:00.0: AGP aperture is 64M @ 0xe8000000 [ 13.933568] i8042: PNP: PS/2 Controller [PNP0303] at 0x60,0x64 irq 1 [ 13.933580] i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp [ 13.935121] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 13.936491] mousedev: PS/2 mouse device common for all mice [ 13.937133] rtc rtc0: invalid alarm value: 2019-5-21 22:62:30 [ 13.937659] rtc_cmos 00:04: registered as rtc0 [ 13.937885] rtc_cmos 00:04: alarms up to one day, 114 bytes nvram [ 13.938244] ledtrig-cpu: registered to indicate activity on CPUs [ 13.942928] NET: Registered protocol family 10 [ 14.055107] Segment Routing with IPv6 [ 14.055356] mip6: Mobile IPv6 [ 14.055378] NET: Registered protocol family 17 [ 14.055902] mpls_gso: MPLS GSO support [ 14.058256] microcode: sig=0x651, pf=0x1, revision=0x29 [ 14.058821] microcode: Microcode Update Driver: v2.2. 
[ 14.058856] Using IPI No-Shortcut mode [ 14.058925] sched_clock: Marking stable (14057713765, 1020469)->(14183988558, -125254324) [ 14.060943] registered taskstats version 1 [ 14.060953] Loading compiled-in X.509 certificates [ 15.760453] Loaded X.509 cert 'Debian Secure Boot CA: 6ccece7e4c6c0d1f6149f3dd27dfcc5cbb419ea1' [ 15.760690] Loaded X.509 cert 'Debian Secure Boot Signer: 00a7468def' [ 15.760897] zswap: loaded using pool lzo/zbud [ 15.761440] AppArmor: AppArmor sha1 policy hashing enabled [ 15.763226] rtc_cmos 00:04: setting system clock to 2019-05-21 02:36:05 UTC (1558406165) [ 15.780688] Freeing unused kernel image memory: 880K [ 15.798080] Write protecting the kernel text: 6752k [ 15.798691] Write protecting the kernel read-only data: 2076k [ 15.798752] Run /init as init process [ 16.675094] piix4_smbus 0000:00:07.3: SMBus Host Controller at 0x5f00, revision 0 [ 16.812021] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI [ 16.812035] e1000: Copyright (c) 1999-2006 Intel Corporation. [ 16.812295] e1000 0000:00:12.0: PCI->APIC IRQ transform: INT A -> IRQ 19 [ 16.874559] SCSI subsystem initialized [ 16.918301] 3ware Storage Controller device driver for Linux v1.26.02.003. 
[ 16.918545] 3w-xxxx 0000:00:0f.0: PCI->APIC IRQ transform: INT A -> IRQ 17 [ 16.963692] usbcore: registered new interface driver usbfs [ 16.963847] usbcore: registered new interface driver hub [ 16.964206] usbcore: registered new device driver usb [ 17.044883] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 17.100093] uhci_hcd: USB Universal Host Controller Interface driver [ 17.100429] uhci_hcd 0000:00:07.2: PCI->APIC IRQ transform: INT D -> IRQ 19 [ 17.100531] uhci_hcd 0000:00:07.2: UHCI Host Controller [ 17.100597] uhci_hcd 0000:00:07.2: new USB bus registered, assigned bus number 1 [ 17.100646] uhci_hcd 0000:00:07.2: detected 2 ports [ 17.100846] uhci_hcd 0000:00:07.2: irq 19, io base 0x0000da00 [ 17.112769] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.19 [ 17.112788] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 17.112803] usb usb1: Product: UHCI Host Controller [ 17.112817] usb usb1: Manufacturer: Linux 4.19.0-5-686-pae uhci_hcd [ 17.112831] usb usb1: SerialNumber: 0000:00:07.2 [ 17.117845] hub 1-0:1.0: USB hub found [ 17.117955] hub 1-0:1.0: 2 ports detected [ 17.288567] e1000 0000:00:12.0 eth0: (PCI:33MHz:32-bit) 00:04:23:e0:09:16 [ 17.288626] e1000 0000:00:12.0 eth0: Intel(R) PRO/1000 Network Connection [ 17.300598] e1000 0000:00:12.0 enp0s18: renamed from eth0 [ 23.253605] scsi host0: 3ware Storage Controller [ 23.260297] 3w-xxxx: scsi0: Found a 3ware Storage Controller at 0xde00, IRQ: 17. 
[ 23.266809] scsi 0:0:0:0: Direct-Access 3ware Logical Disk 0 1.2 PQ: 0 ANSI: 0 [ 23.389261] sd 0:0:0:0: [sda] 468862128 512-byte logical blocks: (240 GB/224 GiB) [ 23.389368] sd 0:0:0:0: [sda] Write Protect is off [ 23.389388] sd 0:0:0:0: [sda] Mode Sense: 00 00 00 00 [ 23.389934] sd 0:0:0:0: [sda] Write cache: enabled, read cache: disabled, supports DPO and FUA [ 23.396290] sda: sda1 sda2 < sda5 > [ 23.400739] sd 0:0:0:0: [sda] Attached SCSI disk [ 23.754215] PM: Image not found (code -22) [ 23.942763] random: fast init done [ 24.167537] cryptd: max_cpu_qlen set to 1000 [ 24.854773] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 25.846221] systemd[1]: Inserted module 'autofs4' [ 25.963217] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid) [ 25.964788] systemd[1]: Detected architecture x86. [ 25.981252] systemd[1]: Set hostname to <crumble0>. [ 27.477851] random: systemd: uninitialized urandom read (16 bytes read) [ 27.494867] random: systemd: uninitialized urandom read (16 bytes read) [ 27.495052] systemd[1]: Reached target System Time Synchronized. [ 27.496154] random: systemd: uninitialized urandom read (16 bytes read) [ 27.497675] systemd[1]: Listening on udev Kernel Socket. [ 27.498640] systemd[1]: Listening on initctl Compatibility Named Pipe. [ 27.500200] systemd[1]: Listening on udev Control Socket. [ 27.500545] systemd[1]: Reached target Remote File Systems. [ 27.502280] systemd[1]: Listening on Syslog Socket. [ 27.503755] systemd[1]: Listening on fsck to fsckd communication Socket. [ 28.222652] EXT4-fs (sda1): re-mounted. 
Opts: errors=remount-ro
[ 28.279034] random: crng init done
[ 28.279061] random: 7 urandom warning(s) missed due to ratelimiting
[ 29.799983] systemd-journald[164]: Received request to flush runtime journal from PID 1
[ 30.352511] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 32.152329] Adding 522236k swap on /dev/sda5. Priority:-2 extents:1 across:522236k FS
[ 32.721332] IPv6: ADDRCONF(NETDEV_UP): enp0s18: link is not ready
[ 32.726115] e1000: enp0s18 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 32.726351] IPv6: ADDRCONF(NETDEV_CHANGE): enp0s18: link becomes ready

EDIT:

$ sudo cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009fbff : System RAM
0009fc00-0009ffff : Reserved
000a0000-000bffff : Video RAM area
000c8000-000c8fff : Adapter ROM
000dc000-000dffff : Reserved
000f0000-000fffff : Reserved
  000f0000-000fffff : System ROM
00100000-1fffffff : System RAM
  1a000000-1a697dd7 : Kernel code
  1a697dd8-1a9452ff : Kernel data
  1aa38000-1aaa8fff : Kernel bss
e6b00000-e6bfffff : PCI Bus 0000:01
e8000000-ebffffff : 0000:00:00.0
eed00000-eedfffff : PCI Bus 0000:01
ef000000-ef7fffff : 0000:00:0f.0
  ef000000-ef7fffff : 3w-xxxx
eff40000-eff7ffff : 0000:00:12.0
eff80000-effbffff : 0000:00:12.0
  eff80000-effbffff : e1000
effc0000-effdffff : 0000:00:12.0
  effc0000-effdffff : e1000
effe0000-effeffff : 0000:00:0f.0
effffff0-efffffff : 0000:00:0f.0
  effffff0-efffffff : 3w-xxxx
fec00000-fec00fff : Reserved
  fec00000-fec003ff : IOAPIC 0
fee00000-fee00fff : Local APIC
  fee00000-fee00fff : Reserved
ffff0000-ffffffff : Reserved
The SWIOTLB is being enabled on your system. By default this reserves 64M of RAM. It is only supposed to be needed if you have more than 4G of RAM and cannot use a hardware IOMMU, or if you are running under Xen virtualization without nested page tables. Congratulations: you found a bug in the kernel :-).

Either of the following boot options should work fine:

iommu=off - disable SWIOTLB.
swiotlb=1 - reduce SWIOTLB to one "slab" = 128K.

Or you can try patching the kernel source code. See below for the patch, and an explanation of the bug that it fixes :-).

Problem analysis

[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000dc000-0x00000000000dffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000001fffffff] usable

Your physical RAM is mapped from 0-512M. We start off with the normal legacy nonsense, but that only affects the area below 1M (0x100000).

[ 0.113219] RAMDISK: [mem 0x1e40a000-0x1f885fff]

The initial ramdisk occupies about 20M.

[ 0.648160] Memory: 419336K/523896K available (6751K kernel code, 660K rwdata, 2068K rodata, 880K init, 452K bss, 104560K reserved, 0K cma-reserved, 0K highmem)

But now we have 104M reserved. I think that included the initrd, which is freed later.

[ 6.074902] Freeing initrd memory: 20976K

I think the bulk of the loss goes here:

[ 6.074945] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 6.074962] software IO TLB: mapped [mem 0x16000000-0x1a000000] (64MB)

64M is the default size allocated for bounce buffers.

The swiotlb is a bounce-buffering mechanism used with [DMA] devices that cannot access all of a system's memory.
The swiotlb code simply allocates a large chunk of low memory early in the bootstrap process; this memory is then handed out in response to DMA allocation requests. In many cases, use of swiotlb memory involves the creation of "bounce buffers," where data is copied between the driver's buffer and the device-accessible swiotlb space. Memory used for the swiotlb is removed from the normal Linux memory management mechanism and is, thus, inaccessible for any use other than DMA buffers. For these reasons, the swiotlb is seen as, at best, inelegant.

DMA issues, part 2, LWN.net, 2004.

The strange thing about this explanation is that it says swiotlb was a workaround for Intel's initial implementation of x86-64. It seems like you have fallen foul of the current dilapidated state of x86-32 Linux. Maybe an oversight when Linux "unified" a lot of the x86-32 and x86-64 code? The SWIOTLB initialization code implies it shouldn't be enabled on systems with 4GB or less... but I think it's broken :-D.

/* 4GB broken PCI/AGP hardware bus master zone */
#define MAX_DMA32_PFN ((4UL * 1024 * 1024 * 1024) >> PAGE_SHIFT)
...
 * If 4GB or more detected (and iommu=off not set) or if SME is active
 * then set swiotlb to 1 and return 1.
 */
int __init pci_swiotlb_detect_4gb(void)
{
	/* don't initialize swiotlb if iommu=off (no_iommu=1) */
	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
		swiotlb = 1;

The problem is that 4UL * 1024 * 1024 * 1024 evaluates to 0 when building for 32-bit :-D. The fix should be something like this, in arch/x86/include/asm/dma.h:

-#define MAX_DMA32_PFN ((4UL * 1024 * 1024 * 1024) >> PAGE_SHIFT)
+#define MAX_DMA32_PFN (4UL * ((1024 * 1024 * 1024) >> PAGE_SHIFT))

I release this kernel patch under GPL2 (and also into the public domain). I have no plan to test or submit it. Do with this knowledge what you will.
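The overflow is easy to reproduce outside the kernel. Here is a quick sketch using shell arithmetic (which is 64-bit in bash), masking to the low 32 bits to mimic what a 32-bit unsigned long keeps:

```shell
# 4 * 1024 * 1024 * 1024 needs 33 bits; on x86-32, unsigned long has only 32,
# so the constant wraps around to 0 and ">> PAGE_SHIFT" is then applied to 0.
full=$((4 * 1024 * 1024 * 1024))   # 4294967296 with 64-bit shell arithmetic
as32=$((full & 0xFFFFFFFF))        # what a 32-bit unsigned long keeps
echo "full=$full truncated=$as32"  # prints: full=4294967296 truncated=0
```

With MAX_DMA32_PFN effectively 0, the `max_possible_pfn > MAX_DMA32_PFN` check is true on any system, which matches the behaviour seen in the boot log.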
82MB of "reserved memory" on 512MB (x86) system
1,558,401,430,000
I've just bought new RAM and I'd like to benchmark it and compare with my old. How can I do that?
The package hardinfo (http://sourceforge.net/projects/hardinfo.berlios/) is a pretty decent system benchmarker with a nice GUI. The simplest way to compare the two would be to benchmark one, save the results, and then compare them against your benchmark of the other. EDIT: Depending on your distro, you may already have hardinfo installed; for example, on Lubuntu it is called "System Profiler and Benchmark".
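If you just want a quick command-line sanity check before firing up a GUI tool, streaming a gigabyte of zeros through memory with dd gives a rough copy-bandwidth figure. This is a crude proxy, not a real benchmark; run it a few times with each RAM kit installed and compare the rates dd reports:

```shell
# Stream 1 GiB of zeros kernel -> userspace -> nowhere; the throughput line
# dd prints at the end is a rough memory/copy bandwidth number.
dd if=/dev/zero of=/dev/null bs=1M count=1024
```

Because nothing touches the disk, the result is dominated by memory and CPU copy speed, which is why it can loosely distinguish two RAM configurations.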
How to benchmark RAM memory with a Linux Distro?
1,558,401,430,000
I'm looking to lower my boot time by whatever means possible. I have about 8GB of RAM in my laptop, and if there's any way I could leverage that into faster boot time, that'd be awesome. Is there a way to make the kernel load itself and all modules immediately into RAM to make things faster? Does the Linux kernel already do this?
Answering precisely the question: Is there a way to speed things up at boot time? Yes. Welcome to systemd; it is available on RHEL 6 onwards, Fedora 15/16 onwards, and CentOS 6 onwards. Elsewhere in the Linux world, such as Ubuntu, you would have Upstart. In the wider Unix world, Solaris has SMF. All of these rethink the boot process and try to minimize the time the boot sequence takes to bring the system to a fully functional, login-ready state. Take a look at systemd -- it is refreshing. Go through these doc links from the author of systemd himself; they are long and very technical, so read them at your leisure. http://0pointer.de/blog/projects/systemd.html http://0pointer.de/blog/projects/on-etc-sysinit.html
Is there a way to speed up boot time by loading things into RAM immediately?
1,558,401,430,000
A monitoring system keeps alerting that my machine is reaching/breaking through its RAM utilization threshold, which is 15 GB. I've done some reading and understand that the apparent RAM utilization is not the actual usage, and that the extra RAM is used for caching/buffering of disk I/O operations to improve the performance of the server. I'm running MySQL on that server; that's the only notable service running. So how can I reduce the disk I/O caching/buffering RAM so as not to break through the threshold? Could this be a MySQL issue and not Linux's? That's the output of free -gt:

[root@ipk ~]# free -gt
             total       used       free     shared    buffers     cached
Mem:            15         15          0          0          0          9
-/+ buffers/cache:          5         10
Swap:            5          0          5
Total:          21         15          6

Linux version is:

[root@ipk ~]# uname -rmo
2.6.32-220.el6.x86_64 x86_64 GNU/Linux
Since you don't seem to accept either our opinions or the various pages we have linked to as 'official', perhaps the official Red Hat documentation will convince you: In this example the total amount of available memory is 4040360 KB. 264224 KB are used by processes and 3776136 KB are free for other applications. Do not get confused by the first line which shows that 28160KB are free! If you look at the usage figures you can see that most of the memory use is for buffers and cache. Linux always tries to use RAM to speed up disk operations by using available memory for buffers (file system metadata) and cache (pages with actual contents of files or block devices). This helps the system to run faster because disk information is already in memory which saves I/O operations. If space is needed by programs or applications like Oracle, then Linux will free up the buffers and cache to yield memory for the applications. If your system runs for a while you will usually see a small number under the field "free" on the first line.
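You can pull the same breakdown the documentation describes straight from /proc/meminfo yourself (all values are in kB):

```shell
# Buffers and Cached are reclaimable page cache, not memory "used up" by apps;
# the kernel gives it back as soon as processes actually need it.
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
buffers=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "buffers+cache: $((buffers + cached)) kB of $total kB total"
```

A monitoring threshold that counts buffers+cache as "used" will fire on any healthy Linux box; the fix is usually to alert on the "-/+ buffers/cache" figure instead.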
How to reduce buffers/cache
1,558,401,430,000
Is it true that a single application cannot allocate more than 2 GiB, even if the system has gigabytes more of free memory, when using a 32-bit x86 PAE Linux kernel? Is this limit loosened by 64-bit x86 Linux kernels?
A 32-bit process has a 32-bit address space, by definition: “32-bit” means that memory addresses in the process are 32 bits wide, and if you have 2^32 distinct addresses you can address at most 2^32 bytes (4GB). A 32-bit Linux kernel can only execute 32-bit processes. Depending on the kernel compilation options, each process can only allocate 1GB, 2GB or 3GB of memory (the rest is reserved for the kernel when it's processing system calls). This is an amount of virtual memory, unrelated to any breakdown between RAM, swap, and mmapped files. A 64-bit kernel can run 64-bit processes as well as 32-bit processes. A 64-bit process can address up to 2^64 bytes (16EB) in principle. On the x86_64 architecture, partly due to the design of x86_64 MMUs, there is currently a limitation to 128TB of address space per process.
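The arithmetic behind those limits, as a quick sketch in shell (run in a 64-bit shell, since the whole point is that the numbers don't fit in 32 bits; the 3G/1G figure assumes the common user/kernel split mentioned above):

```shell
# A 32-bit pointer distinguishes 2^32 addresses:
space32=$((1 << 32))
echo "32-bit address space: $space32 bytes = $((space32 / 1024 / 1024 / 1024)) GiB"
# With the common 3G/1G user/kernel split, a 32-bit process gets at most:
user=$((3 * space32 / 4))
echo "user portion: $((user / 1024 / 1024 / 1024)) GiB"
```

PAE extends the *physical* addresses the kernel can use, not the 32-bit *virtual* addresses a process sees, which is why PAE doesn't lift the per-process limit.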
How much RAM can an application allocate on 64-bit x86 Linux systems?
1,558,401,430,000
For Linux (Ubuntu, Debian, etc.), different desktop environments use different amounts of resources (RAM). Gnome and KDE tend to use more RAM than others like XFCE/LXDE/LXQt: https://unihost.com/help/how-to-choose-linux-desktop-environment-ram-usage/ I am wondering: if I don't log in via the GUI of the desktop environment and only use ssh to interact with the OS, does the RAM usage of these desktop environments still make a difference? For example, I have a Debian Gnome machine and a Debian XFCE machine. After turning on the two machines, I only use SSH to interact with them. In this case, do they use the same amount of RAM?
If no GUI session is used but the system still displays a GUI for login, only this GUI login part will use memory. The processes managing it are mostly waiting and thus mostly doing nothing. If swap is enabled (something to ponder when the only disks available are SSDs that have to be preserved from wearing out; anyway, this answer is not about deciding that), the parts of the process(es) doing nothing will be swapped out when more memory becomes needed, further limiting the memory footprint of the GUI. To answer the question: using SSH won't page in GUI parts that aren't currently in use. The comparison is about the currently loaded and running parts. On a typical Debian installation, choosing Gnome gets GDM (Gnome Display Manager) for the GUI login prompt; choosing XFCE gets LightDM (Lightweight Display Manager). I would tend to say that LightDM (and thus XFCE) will use less memory. In both cases, most of it (not all of it, nor active parts such as displaying the time) would be swapped out if there is swap, but without swap all of it stays in physical RAM. Testing in a Debian 12 amd64 VM without swap, accessed through SSH and doing nothing other than providing an SSH service and offering a GUI login prompt (or none), the used memory measured with free -m on the VM right after echo 3 > /proc/sys/vm/drop_caches was, over a few reboots:

GDM: ~ 494-506MB
LightDM: ~ 324-331MB
neither (console-only): ~ 196-214MB

I'm sure there can be variations, but overall LightDM, and thus having chosen to install XFCE, appears to use less memory than GDM, and thus having chosen to install Gnome. Both were installed, then switched or disabled as described below, and then rebooted. Now you can also use GDM to start XFCE or use LightDM to start Gnome, to muddy this comparison further, but I believe each might then lose some of its integration with its default manager, such as issues when locking the screen or switching users, which would have to be further tinkered with.
On Debian, to switch between them (possibly not instantaneously but only for the next start) when both are installed, run either of:

dpkg-reconfigure gdm3
dpkg-reconfigure lightdm

to get prompted (a reboot or equivalent might be needed after this or some of the operations below). The best way not to use this memory at all is to disable the start of the GUI completely: that's what is done on most server-only systems. Even when they have a video card and are still able to display on their console output, they are usually set not to display any GUI, among other reasons to spare resources, especially memory. If that's what you intend, you can do it now, without uninstalling anything yet, and still change your mind later. On Debian, for GDM this is described on this Debian wiki: GDM - Controlling the GDM daemon:

systemctl set-default multi-user.target

This also applies to LightDM. One can still reconsider and enable them back with:

systemctl set-default graphical.target

Or start one of them only once, without enabling it at startup:

systemctl start gdm
systemctl start lightdm
If I don't log in to the desktop environment, does the desktop environment still cost RAM?
1,558,401,430,000
So I was looking at the load on the servers available to me and saw that some other user has created a really RAM-intensive app that kills my server's hosting abilities. I wonder what bash command gets the top 5 most RAM-using applications on my server. How would such a command look?
You can use ps (note that the first output line is a header, so ask head for six lines to see five processes): ps axo pid,args,pmem,rss,vsz --sort -pmem,-rss,-vsz | head -n 6
How to get top 5 most ram intensive applications from Bash?
1,558,401,430,000
I'm trying to understand the relation between huge page size and how data is actually written to RAM. What happens when a process uses a 1GB huge page - does writing occur in 1GB chunks? Am I completely wrong with this assumption?
There is more than one definition of the chunk size for memory writes. You could consider it to be:

the width of the store instruction (store byte, store word, …), typically 1, 2, 4, 8 or 16;
the width of a cache line, typically something like 16 or 64 bytes (and different cache levels may have different line widths);
the width of the memory bus, which is not directly observable in software;

and possibly a few more reasonable senses. None of these are related to a page size. The page size is an attribute of a page in the MMU. The MMU translates virtual addresses (used by programs) into physical addresses (which designate a physical location in memory). The process to translate a virtual address into a physical address goes something like this:

Look up the address of the first-level descriptor table.
Extract the highest-order bits of the virtual address and use them as an index in the first-level descriptor table.
Decode the L1 descriptor at that index, which yields the address of a second-level descriptor table.
Extract more bits from the virtual address and use them as an index in the second-level descriptor table.
Decode the L2 descriptor at that index, which yields the address of the start of a page. A page is a unit of physically contiguous memory which is described by one entry in the MMU table.
Combine the remaining bits of the virtual address with the page start address to get the physical address.

Common 32-bit architectures go through two table levels; common 64-bit architectures go through 3. Linux supports up to 4 levels. Some CPU architectures support making some pages larger, going through fewer levels of indirection. This makes accesses faster, and keeps the size of the page tables down, at the expense of less flexibility in memory allocation. The time gain is minimal for most applications, but can be felt with some performance-sensitive applications that don't benefit from the flexibility of small pages, such as databases.
Huge pages are pages which go through fewer levels than the normal amount, and are correspondingly larger. The software that is using large pages typically requests them specifically (via flags to mmap, see how is page size determined in virtual address space? for a few more details). After this initial request, it doesn't need to know or care about the page size. In particular, memory accesses are handled by the MMU: software is not involved at the time of access.
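To make the offset arithmetic above concrete, here is a small bash sketch. The 12/21/30 offset-bit values are the common x86-64 figures for 4 KiB, 2 MiB and 1 GiB pages, and the virtual address is made up; this only illustrates the bit-splitting, not any real MMU.

```shell
# A larger page simply means more low-order bits of the virtual address are
# used directly as the in-page offset, and fewer bits index page tables.
vaddr=$(( 0x7f3a12345678 ))

for bits in 12 21 30; do      # 4 KiB, 2 MiB, 1 GiB pages
    page=$(( vaddr >> bits ))                   # which page
    offset=$(( vaddr & ((1 << bits) - 1) ))     # where inside that page
    printf 'offset bits %2d: page 0x%x, offset 0x%x\n' "$bits" "$page" "$offset"
done
```

Whatever the page size, a store instruction still writes only its own operand width; the page size only changes how the MMU resolves which physical page the bytes land in.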
1GB huge page - Is writing occurring in 1GB chunks?
1,558,401,430,000
I am running Debian Jessie and having memory issues when using Google Chrome. I tried disabling extensions, disabling the cache, flushing the cache, and disabling 3D web rendering, but nothing really improves. I sometimes get huge lags and I am really wondering where this is coming from.
If you add up MEM% for all the identical looking chrome processes, then you have well over 100%, which is impossible. That's because those are not, in fact, separate processes, they're threads, which share the same memory space. htop shows these by default, but see here for how to change that and get a view that will make more sense to you. Your total used RAM is 1047 of 1727 MB, so you do not have memory problems. When looking at memory stats, keep in mind that virtual memory, more properly: virtual address space, shown here as VIRT, is not real memory. It's address space, and most of the addresses aren't used and don't correspond to anything. On Linux, the size of this pretend space can be up to 4 GB per process, even if you don't have that much available to start with. A decent metric of the amount of RAM actually consumed is the RSS or resident memory size (in htop's case, RES). If you eliminate threads from the view, you'll see there's actually only one 142 MB google-chrome process (actually there may be a handful of genuinely separate chrome processes, but not dozens). Another significant stat if you are trying to diagnose system performance problems is the amount of CPU time consumed (TIME+), but again nothing looks particularly out of line here WRT chrome.
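If you want a single number for "how much RAM is this application really using", sum the RSS of the real processes instead of eyeballing per-thread MEM%. A sketch, where the process name chrome and the sample numbers are assumptions (ps -C and -o rss= are standard procps options):

```shell
# Print total resident memory, in KiB, of all processes named "$1".
sum_rss_kib() {
    ps -C "$1" -o rss= | awk '{ total += $1 } END { print total + 0 }'
}
# e.g.: sum_rss_kib chrome

# The awk accumulator on its own, fed three sample RSS values in KiB:
printf '142000\n35000\n28000\n' | awk '{ total += $1 } END { print total + 0 }'
# prints 205000
```

Because threads share one address space, each process appears once here, so nothing is double-counted the way summing per-thread MEM% does.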
How to reduce chrome's virtual memory usage?
1,558,401,430,000
I am running grep MemTotal /proc/meminfo to determine the RAM installed on a system, however instead of reporting a number corresponding to an even number of GiB, it is slightly off. I.e. on my 64 GiB system I get a report of 65854272 kiB, which is equivalent to 62.8 GiB. Where did my 1.2 GiB go? Why does the tool not display them to me? free -b reports 67434774528 which is in line with the above.
MemTotal: Total usable RAM in kilobytes (i.e. physical memory minus a few reserved bytes and the kernel binary code)

Source: Torvalds' linux GitHub repo (linux/Documentation/filesystems/proc.txt)

Check BIOS-reserved memory with:

dmesg | grep BIOS | grep reserved
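You can quantify the gap yourself from the two numbers already at hand. A sketch: the 64 GiB installed size and the MemTotal value are taken from the question; the arithmetic is the only thing this adds.

```shell
installed_kib=$(( 64 * 1024 * 1024 ))   # 64 GiB expressed in KiB
memtotal_kib=65854272                   # MemTotal from /proc/meminfo
awk -v a="$installed_kib" -v b="$memtotal_kib" \
    'BEGIN { printf "reserved: %d KiB (%.1f MiB)\n", a - b, (a - b) / 1024 }'
# prints: reserved: 1254592 KiB (1225.2 MiB)
```

That ~1.2 GiB is the firmware-reserved regions plus kernel code that the dmesg command above enumerates.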
Why am I missing 2% of my ram
1,558,401,430,000
Today I decided to run top on my Arch Linux laptop, to be greeted with this: In particular, this bothers me: GiB Mem :225809113546752.0/7.791 This number doesn't change with the actual memory consumption. Does anyone have any idea why this occurs?
The problem is known and fixed already - top: protect against the anomalous 'Mem' graph display

Until this patch, top falsely assumed that there would always be some (small) amount of physical memory after subtracting 'used' and 'available' from the total. But as the issue referenced below attests, a sum of 'used' and 'available' might exceed that total memory amount.

The bug was patched a month ago, but Arch Linux's procps-ng package was built on 2016-07-10, so a simple system upgrade won't help in this case. You have at least two ways to solve the problem:

Building the latest version of procps-ng from source.
Using htop or another alternative for system monitoring.
top showing huge number in place of memory percentage
1,558,401,430,000
Running entirely from RAM has been done on various distros such as Slax, DamnSmallLinux, and newer Ubuntu versions, and since I have 8GB it seems reasonable that I could run many distros entirely from RAM (as long as I select one that has the ability). I would like to do this with OpenELEC (or any distro), and with a further complication: I'm a .NET developer, work and primarily use Windows, which means NTFS and FAT32 are my preferred file systems. Until the day comes if and when Windows can natively read/write ext partitions, this won't change. Ext2fck won't even install in Windows 8, so there's no convincing me of the 'merits' of having drives and partitions in a format unreadable to my daily operation. There are also things like syslinux, vmlinuz, extlinux and the like that can load .iso files into RAM and effectively bootload them. To add icing on the cake, Windows' bootloader will allow me to add these as options in the native Windows boot menu, which I have done for UBCD 5.11, and will even work for virtual file systems like .vhd, etc. So, here's my dream: I want to combine all three into one. I want to take an installed ext2/ext3/ext4 partition, in this case an install of OpenELEC, compress it into an .iso, and create an entry in my boot menu that will either do this directly, or pass it to syslinux or the like that will do the following: Extract the .iso completely into RAM as an ext2/3/4 partition and boot into that OS in RAM. I'll then mount my NTFS hard drive for the /STORAGE portion of the OpenELEC install with ntfs-3g. Then, as a bonus, on exit I'd have the system recompress itself to an .iso, and if successful replace the initial .iso, thereby persisting my changes across boots (provided the shutdown was successful). It wouldn't have to copy itself from memory, either: it could copy any files/changes it wanted to track (if some were not available) back onto the drive it booted from, if present, then compress that back into an .iso.
Slax, DSL and Ubuntu can boot to RAM and persist changes, so I know it's possible if your OS supports it. I'm wondering if this can be made into a 'works for any distro you want'. Slax saves your changes in an ext directory /slax/changes if it's on an ext partition, or as changes.dat otherwise (for NTFS/FAT32). This solution could work too, I suppose, but would probably require more interaction with the hosted OS to coordinate than using the .iso. So, how close can I get? Are there easy solutions for this already out there? Would I have to write a custom 'SYSLINUX'/'ISOLINUX'/'EXTLINUX'/'VMLINUX'? What would be required to make this happen, and if it's possible already, how do I get started?
There's an EXE installer for Puppy Linux which boots from an .iso on FAT32, NTFS or Linux filesystems (i.e. ext2/ext3/ext4, xfs, etc.) using syslinux and runs in RAM using unionfs/aufs with full access to persistent storage (disk, SD, flashdrive, etc.). Other ISOs can be mounted, from the command line or a script of course, as well as by clicking on them in the included ROX-Filer file manager. One convenient use of this is to selectively access or restore files from an old version instead of having to roll back everything. The original Puppy Linux distribution ISO, which itself is usually an ext3/4 filesystem, is kept on the lowest layer of the aufs stack. Changes are recorded in the topmost layer and flushed to disk periodically (configurable) to a "savefile". On boot, the original ISO is loaded to RAM and mounted read-only, then the savefile is loaded and mounted read-only to overlay it, and an empty read-write layer is mounted for any new changes. To preserve a history of changes, just set up automatic or manual copying of the savefile to an archival directory. The O/S "layering" of unionfs/aufs along with multi-mounting of filesystems are the core technologies at work here, so if Puppy Linux doesn't work for you, look for other distros using them. There are quite a number of installation options available for Puppy Linux including a Windows EXE Installer which is a separate package that sets up the Windows boot loader for dual-boot.
Is it possible to run any distro from RAM from within an .iso saved on an NTFS file system?
1,558,401,430,000
I'm able to auto detect RAM in GB as below and round off to the nearest integer: printf "%.f\n" $(grep MemTotal /proc/meminfo | awk '$3=="kB"{$2=$2/1024^2;$3="GB";} 1' | awk '{print $2}') Output: 4 I multiply by 2 to determine the required swap as 8GB ans=`expr $(printf "%.f\n" $(grep MemTotal /proc/meminfo | awk '$3=="kB"{$2=$2/1024^2;$3="GB";} 1' | awk '{print $2}')) \* 2` echo "$ans"G Output: 8G With the below commands I try to create 8GB swap memory. echo "Creating $ans GB swap memory" sudo dd if=/dev/zero of=/swapfile bs="$ans"G count=1048576 sudo chmod 600 /swapfile sudo mkswap /swapfile sudo swapon /swapfile sudo swapon --show However, I get the below error: Creating 8 GB swap memory dd: memory exhausted by input buffer of size 8589934592 bytes (8.0 GiB) mkswap: error: swap area needs to be at least 40 KiB swapon: /swapfile: read swap header failed. Can you please suggest and help me auto-create swap memory which Ideally should be double of that of the RAM. System details: root@DKERP:~# uname -a Linux DKERP 5.4.0-124-generic #140-Ubuntu SMP Thu Aug 4 02:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux root@DKERP:~# free -g -h -t total used free shared buff/cache available Mem: 3.8Gi 1.0Gi 207Mi 54Mi 2.6Gi 2.5Gi Swap: 0B 0B 0B Total: 3.8Gi 1.0Gi 207Mi
The reason why your dd command didn't work is that you set dd's block size to 8 GiB, i.e. you told it to read and write 8 GiB at a time, which would require a RAM buffer of 8 GiB. As Marcus said, 8 GiB is more RAM than you have, so a buffer of that size isn't going to work.

And count=1048576 of those blocks (8 GiB x 1M = 8 PiB, 9,007,199,254,740,992 bytes) is way more disk space than you have too; it's more than most high-end storage clusters in the world would have.

It would work if you used reasonable values for both bs and count. For example, 1 MiB x 8K = 8 GiB:

dd if=/dev/zero of=/swapfile bs=1048576 count=8192

or

dd if=/dev/zero of=/swapfile bs=1M count=8K
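Putting it together with the question's auto-detection goal, a hedged sketch: the size is read straight from /proc/meminfo and doubled, and dd writes in 1 MiB blocks so no huge buffer is needed. The dd/mkswap/swapon lines are commented out here because they really write to /swapfile; review them before running.

```shell
# MemTotal is in KiB; twice the RAM, converted to a MiB block count for dd.
ram_kib=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
swap_mib=$(( ram_kib * 2 / 1024 ))

echo "Would create a ${swap_mib} MiB swap file"
# sudo dd if=/dev/zero of=/swapfile bs=1M count="$swap_mib" status=progress
# sudo chmod 600 /swapfile
# sudo mkswap /swapfile
# sudo swapon /swapfile
```

For a 4 GiB machine (ram_kib=4194304) this yields swap_mib=8192, i.e. the intended 8 GiB.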
Auto detect RAM and create double the swap memory
1,558,401,430,000
Regarding RAM for laptops, I realized that the following options are available: ECC / Non-ECC, and Buffered / Unbuffered, according to: Kingston Technology KVR16LS11/8 8GB 1600MHz DDR3L (PC3-12800) 1.35V Non-ECC CL11 SODIMM Intel Laptop Memory; A-Tech 4GB DDR2 800MHz SODIMM PC2-6400 1.8V CL6 200-Pin Non-ECC Unbuffered Laptop RAM Memory Upgrade Module. These options appear on Newegg for Laptop Memory (ECC only) and Server Memory (both ECC and Buffered/Registered). Question(s): With what command or commands is it possible to know whether the RAM currently installed in each slot is ECC/Non-ECC and Buffered/Unbuffered? Observation(s): On the RAM's box and on the RAM itself, there is no indication of these two features. Furthermore, for some old DDR2-based models, this information does not exist on the web. Goal: The purpose is to check whether the currently installed RAM is the correct kind and to do the correct RAM upgrade.
Run dmidecode -t memory. Handle 0x001A, DMI type 17, 40 bytes Memory Device Array Handle: 0x0019 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 16384 MB Form Factor: DIMM If Total Width > Data Width, the stick is ECC: Handle 0x004D, DMI type 17, 34 bytes Memory Device Array Handle: 0x004C Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 8 GB Form Factor: DIMM Set: None Locator: DIMMA1 Bank Locator: P0_Node0_Channel0_Dimm0 Type: DDR3 Type Detail: Synchronous Speed: 1600 MT/s Manufacturer: Samsung Part Number: M391B1G73BH0-CK0 Rank: 2 Configured Memory Speed: 1600 MT/s I'm unsure what you mean by buffered.
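The width comparison is easy to script. This sketch runs the check over sample dmidecode -t 17 output embedded in a heredoc; on a real machine, pipe sudo dmidecode -t 17 into the same awk instead.

```shell
# ECC sticks carry extra check bits, so Total Width (72) > Data Width (64).
awk '/Total Width:/ { total = $3 }
     /Data Width:/  { print ((total > $3 + 0) ? "ECC" : "Non-ECC") }' <<'EOF'
        Total Width: 72 bits
        Data Width: 64 bits
EOF
```

This prints one ECC/Non-ECC verdict per stick; real output may also contain "Unknown" widths for empty slots, which a production script would want to skip.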
Command(s) to know If the current RAM installed by slot is ECC/Non-ECC and Buffered/Unbuffered
1,558,401,430,000
I have read many controversial statements about ZFS on low-memory systems on the internet, but most of the use cases were about performant data storage. I want to use ZFS not for performance reasons, but because it supports transparent compression and deduplication (the latter may be optional) and still seems to be more mature than BTRFS. I don't want to use any RAID configuration. I want to use it on a laptop computer, for the root and home file systems, and storage space and data safety (recoverability after power loss or other random inconsistencies, very low risk of corruption due to low RAM, etc.) are more important than disk performance. I want safety comparable to what ext2/3/4 gives. I would like to use ext4 on top of a ZVOL. So, the questions are: Can ZFS be configured to work reliably with "low RAM" if IO performance/caching is not of concern, and no RAID functionality is wanted? How does the RAM needed change if I do not use ZFS as a filesystem itself, but just use ZVOLs where I put another filesystem on top? How does the RAM needed change with deduplication turned on? If deduplication is turned on and RAM starts to get low, is it still safe -- can ZFS just suspend deduplication and use less RAM? Is it possible to deactivate automatic deduplication, but run it from time to time manually? Can ext4 on top of a ZVOL reliably store my data even in low-RAM situations, and if inconsistencies happen, are the chances of successful repair high (as they are with ext2/3/4)? Does ext4 on top of a ZVOL increase robustness because it adds ext4's robustness, or is the data only as robust as the underlying ZVOL? System specs: Linux, 8 GiB RAM (shared with the graphics card), but most (at least 7 GiB) of it should be available for user-space software; about 700 GiB of SSD storage to use for the ZFS; maybe, on another system, 128 GiB of eMMC to use for ZFS.
Current disk usage (du -sh of the bigger directories at /) (/ is ext4, /var mounted on top is reiserfs) (I want to move that to storage with transparent compression): 74M /etc 342G /home 5.0G /opt 1.5G /root 261M /tmp 35G /usr 30G /var OR, should I just use BTRFS (I have read that severe/hard-to-recover data loss can occur due to "bugs", but that is all controversial ...)?
Short answer: Yes, it's possible to use low RAM (~ 1 GB) with ZFS successfully. You should not use dedup, but RAID and compression are usually OK. Once you have deduplication enabled, it applies to all newly written data and you cannot easily get rid of it. You cannot enable dedup retroactively, because it works on newly written data only. Your idea is needlessly complex for no good reason, so I would recommend to just use ZFS and call it a day. Long answer: Can ZFS be configured to work reliably with "low RAM" if IO performance/caching is not of concern, and no RAID functionality is wanted? Yes, even with RAID features enabled. You need much less than people claim on the net; for example, look at this guy who runs a speedy file server with FreeBSD, 2 cores and 768 MB virtualized. Or have a look at the SolarisInternals Guide (currently only available through archive.org), where 512 MB is mentioned as the bare minimum, 1 GB as the minimum recommendation and 2 GB as a full recommendation. I would stay away from dedup, though. Not because it would be slow due to memory paging, but because you cannot go back to non-dedup if your system grinds to a halt. Also, it's a trade-off between RAM and disks, and on a budget system you have neither, so you will not gain much. How does the RAM needed change if I do not use ZFS as a filesystem itself, but just use ZVOLs where I put another filesystem on top? You would need additional memory for the second filesystem and for the layer above ZFS, depending on how you plan to access it (virtualization like KVM, FUSE, iSCSI etc.) How does the RAM needed change with deduplication turned on? If deduplication is turned on and RAM starts to get low, is it still safe -- can ZFS just suspend deduplication and use less RAM? You cannot suspend deduplication, but your data is still safe. There will be a lot of memory swapping and waiting, so it might not be very usable.
Deduplication is online, so to disable it, you would need to turn dedup off and write all data again (which is essentially copying all data to a new filesystem and destroying the old one). Is it possible to deactivate automatic deduplication, but run it from time to time manually? No, because it does not affect data at rest. If you have dedup on and want to write a block, ZFS looks whether it is present in the dedup table. If yes, then the write is discarded and a reference is added to the dedup table. If no, it is written and the first reference is added. This means that your old data is not affected by dedup, and turning it on without writing any new block does nothing regarding the used size of the old data. Can ext4 on top of a ZVOL reliably store my data even in low-RAM situations, and if inconsistencies happen, are the chances of successful repair high (as they are with ext2/3/4)? Does ext4 on top of a ZVOL increase robustness because it adds ext4's robustness, or is the data only as robust as the underlying ZVOL? In my eyes this is needless complexity, as you would get no new features (unlike in the reverse case with ext4 below and ZFS on top, e.g. snapshots), and additionally you'd get some new responsibilities like fsck and more fdisk formatting exercises. The only use case where I would do something like that is if I had a special application that demands a specific file system's low-level features or has hard-coded assumptions (fortunately, that behavior seems to have died out in recent times).
Reliability of ZFS/ ext4 on ZVOL, used not for performance but for transparent compression, on low memory system?
1,558,401,430,000
I installed 16GB of RAM on a motherboard that shouldn't take it. Should I buy a better motherboard or change anything at all? It seems to be working fine. Memory: Crucial Ballistix Sport "(8GBx2) DDR3 PC3-12800" Board: Asrock N68C-S UCC "Max. capacity of system memory: 8GB" Does gnome-control-center.real info lie? Memory: 15,7GB Does dmidecode -t16 say my board can take 2x 8GB or 8GB total? Maximum Capacity: 8 GB Number Of Devices: 2 Does free -h lie saying 11 out of 15GB is used? total used free shared buffers cached Mem: 15G 11G 4,2G 7,8G 140M 9,3G Shouldn't this output of dmidecode -t 17 say 1600 MHz speed? Handle 0x0010, DMI type 17, 27 bytes Size: 8192 MB Speed: 400 MHz Handle 0x0012, DMI type 17, 27 bytes Size: 8192 MB Speed: 400 MHz
The short story: If your mobo posts, and your system boots, and free/top show your ram as 16 gB, then it works. Even mobo makers can under-report capacity of system boards, so the real test is: if ram is installed correctly, matched correctly, runs, ie, boots, and runs with stability, ie, doesn't crash, then it works. You can also test by trying to use all your memory for something or other, and seeing if the system remains stable. Because you got very good ram, Crucial, it's quite possible that lower grade ram might not have worked at 16gB. That can be why they don't say it supports 16gB but opt for the more conservative 8gB. Your tools, like free and top, that report real memory of the system are not lying; that is the usable memory the kernel has access to. Tools that read dmi data do lie, because dmi lies randomly based on the companies who filled that data out.

Does gnome-control-center.real info lie? Memory: 15,7GB

No, it is telling you the truth.

Does dmidecode -t16 say my board can take 2x 8GB or 8GB total? Maximum Capacity: 8 GB Number Of Devices: 2

It says 8gB total. You can see it clearly when looking at a sample type 16 (mine, in this case). The capacity refers to the capacity of the array. This is a single memory array. This array has an alleged (though false in your case) capacity of 8gB (correct in my case), and in my case, it has 4 devices. In your case it has 2 devices. Note that you cannot deduce the overall capacity from the max stick you can use in one slot, unfortunately. That is, you could have 4 slots, with an 8gB capacity, but a 4gB per slot max, which would mean you could use either 4x2gB sticks, or 2x4gB, but not 4x4gB.

Handle 0x0012, DMI type 16, 15 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: None Maximum Capacity: 8 GB Error Information Handle: Not Provided Number Of Devices: 4

Does free -h lie saying 11 out of 15GB is used?

No, free is telling you the truth.
top will tell you the same truth (though the question of what the kernel considers free or not free is highly arcane, and varies with the implementations of these tools, but that veers far off topic of this question). This is the kernel reporting to userland what ram it has access to, and what is used. Shouldn't this output of dmidecode -t 17 say 1600 MHz speed? It depends on your system. And on how dmidecode is interpreting the data. I'm rusty on this part of the question. The long story: Since I had to deal pretty heavily with ram reporting issues, I had to discover the variance in quality of the dmidecode ram data reports. Note that this is NOT the fault of dmidecode, since its job is to report the dmi data, not to interpret it or correct it. First: dmidecode essentially reports two sets of data: 1: some data that someone filled out, that is, a low-paid drone at the motherboard vendor has a form to fill out, and either doesn't bother doing it right, or does it right for one model, and then just copies over that data to another. 2: real data, like whether a ram slot has ram in it, its size, type, speed, etc. So in the case of system board ram capacity, dmidecode is NOT telling you the capacities based on any actual technical specifications available to dmidecode when it runs. What it's doing is repeating the data that the aforementioned underpaid person was told to fill out to check some box prior to shipping the hardware. Some mobo vendors supply this data perfectly, and you can fully trust their statements. Others offer completely nonsensical statements, which leads dmidecode to correctly report 4x2gB ram installed, but a capacity of 4gB. For example, dmidecode will, I believe, almost always, if not always, tell you the exactly correct information about your installed ram, quite accurately, but the dmi data will then often contain wrong data about capacity.
When I had to deal with this issue, I always used the per-stick reporting as authoritative, and I always let it override dmidecode data about actual capacity, because the latter is not real data.

# can be true, false, totally off, or pure fiction re capacity
# the rest of the data is usually pretty good though
dmidecode -t 5

# extremely accurate and reliable, per stick information. Trust it.
dmidecode -t 6

# same as 5, might be right re maximum capacity, might not be
dmidecode -t 16

# extremely accurate, can trust it, but can't learn max capacity
dmidecode -t 17

Basically it depends on the motherboard vendor: did they complete the data fields that types 5 and 16 use correctly? I'll give you an example that clearly shows the fields they didn't feel like filling out.

Handle 0x001A, DMI type 17, 27 bytes Memory Device Array Handle: 0x0012 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 72 bits Size: 2048 MB Form Factor: DIMM Set: None Locator: DIMM3 Bank Locator: BANK3 Type: DDR2 Type Detail: Synchronous Speed: 400 MHz Manufacturer: Manufacturer3 Serial Number: SerNum3 Asset Tag: AssetTagNum3 Part Number: PartNum3

You see this all through dmi data, and inside of /sys: data that was not filled out, half filled out by the vendors, or filled out wrong. The items after Speed were not filled out right. My personal favorite is this, which is far more common internally than you'd think:

[Field Name]: To be filled by O.E.M.

You'd think that in this day and age there would be something that actually tells systems exactly what they are, but that's sadly not the case. I could show you hundreds of instances of machine dmidecode data that demonstrate this issue, but really you only have to see one or two. I tend to think that the better mobo makers tend to fill out their dmi data sets better, and the lower-end ones tend not to, but there's no hard and fast rule about it.
As a basic rule, this is the information you can trust from dmidecode and ram:

DMI type 5 # Almost nothing in there except some generic information: Error Detecting Method: 64-bit ECC / Error Correcting Capabilities: None / Associated Memory Slots: 4 / Enabled Error Correcting Capabilities: None

DMI type 6: Socket Designation: DIMM3 / Current Speed: 167 ns / Installed Size: 2048 MB (Double-bank Connection) / Enabled Size: 2048 MB (Double-bank Connection) / Error Status: OK # probably

DMI type 16: Number Of Devices: 4

DMI type 17: Data Width: 72 bits / Size: 2048 MB / Locator: DIMM0 / Bank Locator: BANK0 / Type: DDR2 / Type Detail: Synchronous # usually anyway

From Gilles, in comments: Another reason why dmidecode might underreport the maximum capacity is when X GB sticks didn't exist yet when the board was manufactured (or the board manufacturer didn't bother to test with them for some reason), so the board documents Y GB as the maximum with Y < X, but when X GB sticks appear they turn out to work. The key is to realize that the max capacity that dmidecode reports the memory array having is not calculated; it's just some data someone entered when they created the dmi table for the mobo. I generally trust the vendor mobo documentation over the dmi data, but as this poster discovered, even that's not reliable.
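Following that rule (trust the per-stick type 17 data, not the claimed array capacity), you can total the installed sticks yourself. A sketch over sample output; on a real box, replace the heredoc with sudo dmidecode -t 17.

```shell
# Sum only real "Size: N MB" lines; empty slots say "No Module Installed".
awk '/^[ \t]*Size: [0-9]+ MB/ { total += $2 }
     END { printf "%d MB installed\n", total }' <<'EOF'
        Size: 8192 MB
        Size: No Module Installed
        Size: 8192 MB
EOF
```

This prints "16384 MB installed" for the sample. Note that newer dmidecode versions report large sticks as "Size: 8 GB", so the pattern may need adjusting on current hardware.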
Does my system use all available RAM?
1,558,401,430,000
I've heard about the Scrub of Death. However one can disable checksumming in ZFS datasets. If so, will that make the situation safer for a system that's not using ECC RAM? I'm not thinking of a NAS or anything like that - more of a workstation deployment with a single drive just to use the ZFS volume management and snapshots (and no need for fsck) benefits. I don't want to use redundancy even. Will a bad memory location still completely destroy my storage if I disable ZFS checksums?
I've heard about the Scrub of Death. You should read this: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ Unless the memory in your system is absolute trash, it will almost certainly have fewer problems than your disks. If your system has an SSD and a "slow" CPU, the performance hit from calculating the checksum data will be negligible. My personal opinion on this is that, unless your CPU is 100% in use the majority of time (and sometimes even then), it's best to just let ZFS use checksums. I feel like there's much confusion in this topic. There is. Unfortunately, I don't have a better answer. If you ask this question on the ZFS on Linux mailing list, you'll get a much more detailed answer.
Is ZFS safer with non-ECC RAM if you disable checksums?
1,558,401,430,000
This is a follow up question from Sort large CSV files (90GB), Disk quota exceeded. So now I have two CSV files sorted, as file1.csv and file2.csv each CSV file has 4 columns, e.g. file 1: ID Date Feature Value 01 0501 PRCP 150 01 0502 PRCP 120 02 0501 ARMS 5.6 02 0502 ARMS 5.6 file 2: ID Date Feature Value 01 0501 PRCP 170 01 0502 PRCP 120 02 0501 ARMS 5.6 02 0502 ARMS 5.6 Ideally, I want to diff the two files in such a way that if two rows in the two files have the same ID, Date and Feature, but different values, then output something like: ID Date Feature Value1 Value2 Of course, this might be asking too much. Something like ID1 Date1 Feature1 Value1 ID2 Date2 Feature2 Value2 also works. In the above example, I would like to output 01 0501 PRCP 150 170 or 01 0501 PRCP 150 01 0501 PRCP 150 I think the main question is how to compare in such a way and how to output to a csv file. Thanks. Sample output from Gilles answer: The output from comm is $ head -20 comm_output.txt ACW00011604,19490101,PRCP,0 AE000041196,20070402,TAVG,239 AE000041196,20070402,TAVG,244 AE000041196,20080817,TMIN,282 AE000041196,20130909,TAVG,350 AE000041196,20130909,TMAX,438 AE000041196,20130909,TMIN,294 AE000041196,20130910,TAVG,339 AE000041196,20130910,TAVG,341 AE000041196,20150910,TAVG,344 The output of awk is $ head awk_output.csv , ACW00011604,19490101,PRCP,0,,, AE000041196,20070402,TAVG,239,,, AE000041196,20070402,TAVG,244,,, AE000041196,20080817,TMIN,282,,, AE000041196,20130909,TAVG,350,,, AE000041196,20130909,TMAX,438,,, AE000041196,20130909,TMIN,294,,, AE000041196,20130910,TAVG,339,,, AE000041196,20130910,TAVG,341,,, AE000041196,20150910,TAVG,344,,, Here is the sample input, if you insist head file1.csv ACW00011604,19490101,PRCP,0 ACW00011604,19490101,SNOW,0 ACW00011604,19490101,SNWD,0 ACW00011604,19490101,TMAX,289 ACW00011604,19490101,TMIN,217 ACW00011604,19490102,PRCP,30 ACW00011604,19490102,SNOW,0 ACW00011604,19490102,SNWD,0 ACW00011604,19490102,TMAX,289 
ACW00011604,19490102,TMIN,228 head file2.csv ACW00011604,19490101,SNOW,0 ACW00011604,19490101,SNWD,0 ACW00011604,19490101,TMAX,289 ACW00011604,19490101,TMIN,217 ACW00011604,19490102,PRCP,30 ACW00011604,19490102,SNOW,0 ACW00011604,19490102,SNWD,0 ACW00011604,19490102,TMAX,289 ACW00011604,19490102,TMIN,228 ACW00011604,19490102,WT16,1
Let's review tools that combine two files together line by line in some way: paste combines two files line by line, without paying attention to the contents. comm combines sorted files, paying attention to identical lines. This can weed out identical lines, but subsequently combining the differing line would require a different tool. join combines sorted files, matching identical fields together. sort can merge two files. awk can combine multiple files according to whatever rules you give it. But with such large files, you're likely to get best performance by using the most appropriate special-purpose tools than with generalist tools. I'll assume that there are no duplicates, i.e. within one files there are no two lines with the same ID, date and feature. If there are duplicates then how to cope with them depends on how you want to treat them. I also assume that the files are sorted. I also assume that your shell has process substitution, e.g. bash or ksh rather than plain sh, and that you have GNU coreutils (which is the case on non-embedded Linux and Cygwin). I don't know if your separators are whitespace or tabs. I'll assume whitespace; if the separator is always exactly one tab then declaring tab as the separator character (cut -d $'\t', join -t $'\t', sort -t $'\t') and using \t instead of [ \t]\+ should squeeze a tiny bit of performance. Set the locale to pure ASCII (LC_ALL=C) to avoid any performance loss related to multibyte characters. Since join can only combine rows based on one field, we need to arrange for fields 1–3 to appear as a single field. To do that, change the separator, either between 1 and 2 and 2 and 3 or between 3 and 4. I'll change 1–3 to use ; instead of whitespace. That way you get all the line combinations, whether they're identical or not. You can then use sed to remove lines with identical values. 
join -a 1 -a 2 <(sed 's/[ \t]\+/;/; s/[ \t]\+/;/' file1.csv) \
               <(sed 's/[ \t]\+/;/; s/[ \t]\+/;/' file2.csv) |
sed '/[ \t]\(.*\)[ \t]\+\1$/d' |
tr ';' '\t'

Note that unpairable lines end up as 4-column lines with no indication as to whether they came from file 1 or file 2. Remove -a 1 -a 2 to suppress all unpairable lines.

If you have a majority of identical lines, this wastes time joining them and weeding them out. Another approach would be to use comm -3 to weed out the identical lines. This produces a single output stream where the lines are in order, but lines from file 2 have a leading tab. You can then use awk to combine consecutive lines where the two files have the same fields 1–3. Since this involves awk, it might well end up being slower if there are a lot of non-identical lines.

comm -3 file1.csv file2.csv |
awk '
    $1 "\t" $2 "\t" $3 == k { if ($4 != v) print k "\t" v "\t" $4; next; }
    { print k "\t" v }
    { k = $1 "\t" $2 "\t" $3; v = $4; }
'
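To see what the join pipeline does, here is a toy run on two hypothetical 2-line files (f1 and f2 are made-up names, same shape as the question's data): the row identical in both files is removed, and the differing row comes out once with both values.

```shell
# Build two tiny sample files: fields are ID, date, feature, value.
printf 'A 1 X 10\nA 1 Y 20\n' > f1
printf 'A 1 X 10\nA 1 Y 30\n' > f2

# Glue fields 1-3 together with ';' so join sees them as a single key field.
sed 's/[ \t]\+/;/; s/[ \t]\+/;/' f1 > f1.key
sed 's/[ \t]\+/;/; s/[ \t]\+/;/' f2 > f2.key

# Join on the glued key, drop rows whose two values are identical,
# then restore tab separators.
out=$(join -a 1 -a 2 f1.key f2.key | sed '/[ \t]\(.*\)[ \t]\+\1$/d' | tr ';' '\t')
printf '%s\n' "$out"
```

The identical X row disappears; the differing Y row survives carrying both values (20 and 30).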
diff two large CSV files (each 90GB) and output to another csv
1,558,401,430,000
Can anybody clarify how support for large address aware (LAA) for 32-bit applications works in Wine? I know that by default in Windows, 32-bit applications are limited to a maximum of 2GB of RAM; however, it is possible to set an LAA flag on the executable, to allow it to use up to 4GB. My understanding is that, by default, Wine respects this 2GB limit for 32-bit Windows applications and it will allow 4GB to be used, if the LAA flag is set on the .exe. However, I have heard that there is also a global option that can be set for Wine to automatically allow 4GB for all 32-bit Windows applications - LARGE_ADDRESS_AWARE=1 (or something like that?). Can someone please clarify if that is the correct environment variable? Does it work in vanilla Wine, or just in Wine-staging? Is it also required when running 32-bit applications in a 64-bit Wine prefix? I thought there was a Wine user guide page about it, but I have been unable to find it.
There is a patch that you can install for each x86 application you are trying to run under WINE, which you can find here: https://ntcore.com/?page_id=371

Additionally, there is a tool for setting the LAA flag in PE files. Taking a look at the contents of the files included in the GitHub repository, it appears you are correct that the variable you are looking for is LARGE_ADDRESS_AWARE: https://github.com/randomstuff/pe-set-laa. According to the creator of this patch, it will work under WINE proper. Depending on what you are trying to run (games or portable executables), there seems to be some inconsistency regarding the efficacy of the flag with certain applications.

If you want to build Wine from source, you can also use this code to patch LAA on globally.

diff --git a/dlls/kernel32/heap.c b/dlls/kernel32/heap.c
index cac73ec..fb214b9 100644
--- a/dlls/kernel32/heap.c
+++ b/dlls/kernel32/heap.c
@@ -1423,6 +1423,7 @@ VOID WINAPI GlobalMemoryStatus( LPMEMORYSTATUS lpBuffer )
     /* values are limited to 2Gb unless the app has the IMAGE_FILE_LARGE_ADDRESS_AWARE flag */
     /* page file sizes are not limited (Adobe Illustrator 8 depends on this) */
+/*
     if (!(nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE))
     {
         if (lpBuffer->dwTotalPhys > MAXLONG) lpBuffer->dwTotalPhys = MAXLONG;
@@ -1430,7 +1431,7 @@ VOID WINAPI GlobalMemoryStatus( LPMEMORYSTATUS lpBuffer )
         if (lpBuffer->dwTotalVirtual > MAXLONG) lpBuffer->dwTotalVirtual = MAXLONG;
         if (lpBuffer->dwAvailVirtual > MAXLONG) lpBuffer->dwAvailVirtual = MAXLONG;
     }
-
+*/
     /* work around for broken photoshop 4 installer */
     if ( lpBuffer->dwAvailPhys + lpBuffer->dwAvailPageFile >= 2U*1024*1024*1024)
         lpBuffer->dwAvailPageFile = 2U*1024*1024*1024 - lpBuffer->dwAvailPhys - 1;
diff --git a/dlls/ntdll/virtual.c b/dlls/ntdll/virtual.c
index 4d4bc3b..2c2264c 100644
--- a/dlls/ntdll/virtual.c
+++ b/dlls/ntdll/virtual.c
@@ -1845,7 +1845,7 @@ void virtual_set_large_address_space(void)
 {
     IMAGE_NT_HEADERS *nt = RtlImageNtHeader( NtCurrentTeb()->Peb->ImageBaseAddress );

-    if (!(nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)) return;
+    // if (!(nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)) return;
     /* no large address space on win9x */
     if (NtCurrentTeb()->Peb->OSPlatformId != VER_PLATFORM_WIN32_NT) return;
How does large address aware (LAA) work in Wine?
Possible Duplicate: Correctly determining memory usage in Linux

I see that almost all my RAM is in use. Is this bad? The strange thing is that I don't see what is actually using the RAM.
No problem there. Linux is borrowing the RAM for caching. This is desirable (RAM is faster than disk) and absolutely normal behaviour. From that link:

Why do top and free say all my RAM is used if it isn't? This is just a misunderstanding of terms. Both you and Linux agree that memory taken by applications is "used", while memory that isn't used for anything is "free". To see how much RAM you have free, type free -m and look at the -/+ buffers/cache line.

On my machine, for example:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          5868       4031       1836          0        282       2260
-/+ buffers/cache:       1489       4379
Swap:         6143          0       6143

Thus I'm using about 1.5 GB of RAM, not 4 GB as the first line might make it look like.
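As a sketch, the same -/+ buffers/cache arithmetic can be done directly against /proc/meminfo (field names as on Linux; on newer kernels the MemAvailable field is an even better estimate):

```shell
# Memory actually used by applications = total - free - buffers - cached,
# all values in kB as reported by /proc/meminfo.
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
freek=$(awk '/^MemFree:/  {print $2}' /proc/meminfo)
bufs=$(awk  '/^Buffers:/  {print $2}' /proc/meminfo)
cache=$(awk '/^Cached:/   {print $2}' /proc/meminfo)
used=$(( total - freek - bufs - cache ))
echo "really used: $(( used / 1024 )) MiB of $(( total / 1024 )) MiB"
```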
Shouldn't there be more RAM free than this? [duplicate]
Situation: I've got a larger (>10GB) read-only collection of small files with loads of duplicates that I need to have available on multiple machines, even on different file systems. We can assume Linux kernel > 5.3.0. One solution would be to put these into a squashfs image file, use deduplication and zstd compression when creating it, and mount that. Now, this can only work out for me if mounting doesn't mean that all files need to fit in RAM. Is mounting a compressed squashfs file system like that always a decompress-fully-to-RAM business?
Mounting a squashfs file system doesn’t involve decompressing it into memory; decompression is done on the fly, as necessary. There is a small internal cache to avoid repeatedly decompressing the same data, but that’s all. squashfs file systems can store up to 2^64 bytes of data, so it wouldn’t be practical to decompress them fully on mount.
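A minimal sketch of the workflow from the question (file and directory names here are made up; mksquashfs deduplicates identical files by default, and -comp zstd selects zstd compression where the squashfs-tools build supports it):

```shell
# Skip gracefully on systems without squashfs-tools installed.
command -v mksquashfs >/dev/null 2>&1 || { echo "mksquashfs not installed, skipping"; exit 0; }

mkdir -p files
printf 'hello\n' > files/a.txt
cp files/a.txt files/b.txt          # duplicate content, stored only once

# Fall back to the default compressor if this build lacks zstd support.
mksquashfs files files.sqsh -comp zstd -noappend >/dev/null 2>&1 ||
    mksquashfs files files.sqsh -noappend >/dev/null

ls -l files.sqsh
# Mounting (needs root) would then be:  mount -t squashfs files.sqsh /mnt -o loop
```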
Does mounting squashfs put the whole filesystem in RAM?
I have bought new RAM, and it's not detected.

In short: I got new RAM, 16 GB, to replace my old 4 GB + 4 GB. The new module isn't detected by my laptop's OS(?)/software(?). When I installed it, it didn't work; I got only 4 GB.

The long version: I got new RAM, 16 GB, to replace my old 4 GB + 4 GB, so it would be 20 GB in total. But when I installed it, it didn't work; what I mean is that I opened System Monitor and it showed (and still shows) only 4 GB. The thing is, some programs/utils can detect it. Here I will paste all the output of the commands I've tried.

inxi -Fxz

System:    Host: lmde Kernel: 4.8.0-53-generic x86_64 (64 bit gcc: 5.4.0) Desktop: Cinnamon 3.4.3 (Gtk 3.18.9-1ubuntu3.3) Distro: Linux Mint 18.2 Sonya
Machine:   System: LENOVO product: 20250 v: Lenovo Z710 Mobo: LENOVO model: Durian 7A1 v: 31900004Std Bios: LENOVO v: 7FCN35WW date: 09/02/2013
CPU:       Quad core Intel Core i7-4700MQ (-HT-MCP-) cache: 6144 KB flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 19156 clock speeds: max: 2400 MHz 1: 2400 MHz 2: 2147 MHz 3: 2350 MHz 4: 2400 MHz 5: 2400 MHz 6: 2400 MHz 7: 2399 MHz 8: 2400 MHz
Graphics:  Card-1: Intel 4th Gen Core Processor Integrated Graphics Controller bus-ID: 00:02.0 Card-2: NVIDIA GK107M [GeForce GT 745M] bus-ID: 01:00.0 Display Server: X.Org 1.18.4 drivers: intel (unloaded: fbdev,vesa) FAILED: nouveau Resolution: [email protected] GLX Renderer: Mesa DRI Intel Haswell Mobile GLX Version: 3.0 Mesa 12.0.6 Direct Rendering: Yes
Audio:     Card-1 Intel 8 Series/C220 Series High Definition Audio Controller driver: snd_hda_intel bus-ID: 00:1b.0 Card-2 Intel Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller driver: snd_hda_intel bus-ID: 00:03.0 Sound: Advanced Linux Sound Architecture v: k4.8.0-53-generic
Network:   Card-1: Intel Wireless 7260 driver: iwlwifi bus-ID: 07:00.0 IF: wlp7s0 state: up mac: <filter> Card-2: Qualcomm Atheros QCA8171 Gigabit Ethernet driver: alx port: 3000 bus-ID: 08:00.0 IF: enp8s0 state: down mac: <filter>
Drives:    HDD Total Size: 1240.3GB
(6.3% used) ID-1: /dev/sda model: ST1000LM014 size: 1000.2GB ID-2: /dev/sdd model: ADATA_SP580 size: 240.1GB
Partition: ID-1: / size: 220G used: 20G (10%) fs: ext4 dev: /dev/sdd2
RAID:      No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors:   System Temperatures: cpu: 59.0C mobo: 59.0C gpu: 45.0 Fan Speeds (in rpm): cpu: N/A
Info:      Processes: 294 Uptime: 1:24 Memory: 2301.1/3863.2MB Init: systemd runlevel: 5 Gcc sys: 5.4.0 Client: Shell (zsh 5.1.1) inxi: 2.2.35

sudo lshw -short -C memory (pasting with sudo so others can just copy-paste)

H/W path         Device     Class       Description
============================================================
/0/0                        memory      128KiB BIOS
/0/4/b                      memory      32KiB L1 cache
/0/4/c                      memory      256KiB L2 cache
/0/4/d                      memory      6MiB L3 cache
/0/a                        memory      32KiB L1 cache
/0/2a                       memory      20GiB System Memory
/0/2a/0                     memory      16GiB SODIMM DDR3 Synchronous 1600 MHz (0,6 ns)
/0/2a/1                     memory      DIMM [empty]
/0/2a/2                     memory      4GiB SODIMM DDR3 Synchronous 1600 MHz (0,6 ns)
/0/2a/3                     memory      DIMM [empty]

sudo lshw -class memory

  *-firmware
       description: BIOS
       vendor: LENOVO
       physical id: 0
       version: 7FCN35WW
       date: 09/02/2013
       size: 128KiB
       capacity: 4032KiB
       capabilities: pci upgrade shadowing cdboot bootselect edd int13floppynec int13floppytoshiba int13floppy360 int13floppy1200 int13floppy720 int13floppy2880 int9keyboard int10video acpi usb biosbootspecification uefi
  *-cache:0
       description: L1 cache
       physical id: b
       slot: L1 Cache
       size: 32KiB
       capacity: 32KiB
       capabilities: synchronous internal write-back instruction
       configuration: level=1
  *-cache:1
       description: L2 cache
       physical id: c
       slot: L2 Cache
       size: 256KiB
       capacity: 256KiB
       capabilities: synchronous internal write-back unified
       configuration: level=2
  *-cache:2
       description: L3 cache
       physical id: d
       slot: L3 Cache
       size: 6MiB
       capacity: 6MiB
       capabilities: synchronous internal write-back unified
       configuration: level=3
  *-cache
       description: L1 cache
       physical id: a
       slot: L1 Cache
       size: 32KiB
       capacity: 32KiB
       capabilities: synchronous internal write-back data
       configuration: level=1
  *-memory
       description: System Memory
       physical id: 2a
       slot: System board or motherboard
       size: 20GiB
     *-bank:0
          description: SODIMM DDR3 Synchronous 1600 MHz (0,6 ns)
          product: CT204864BF160B.C16
          vendor: Unknown
          physical id: 0
          serial: A4205EAD
          slot: DIMM0
          size: 16GiB
          width: 64 bits
          clock: 1600MHz (0.6ns)
     *-bank:1
          description: DIMM [empty]
          product: Empty
          vendor: Empty
          physical id: 1
          serial: Empty
          slot: DIMM1
     *-bank:2
          description: SODIMM DDR3 Synchronous 1600 MHz (0,6 ns)
          product: M471B5173BH0-YK0
          vendor: Samsung
          physical id: 2
          serial: 136B8093
          slot: DIMM2
          size: 4GiB
          width: 64 bits
          clock: 1600MHz (0.6ns)
     *-bank:3
          description: DIMM [empty]
          product: Empty
          vendor: Empty
          physical id: 3
          serial: Empty
          slot: DIMM3

sudo dmidecode

free -m

              total        used        free      shared  buff/cache   available
Mem:           3863        2406         277         430        1178         696
Swap:             0           0           0

cat /proc/cmdline

BOOT_IMAGE=/boot/vmlinuz-4.8.0-53-generic root=UUID=91af3ab8-8c93-40ef-930a-2dc7038f2dfc ro elevator=deadline quiet splash vt.handoff=7

dmesg | grep -i memory — I did, but the output is very long.

Also:
- I booted into the BIOS and it showed that there is 20 GB (but it said it in MB, something like 20480 MB).
- I visited the Intel page for my processor (google for "Intel Core i7-4700MQ Processor"; I can't paste links, newbie restrictions) and it said that it supports 32 GB.
- I booted a Windows 10 live CD and it showed that there is 20 GB, but only 4 GB is available.
- I ran memtest86; here is the screenshot with the results. What I don't like about it is that in the top left corner it shows "Memory: 4009 MB". So was the 16 GB detected?
- I booted a Linux Mint live CD and it showed exactly the same as the current installation (4 GB).
- I found on the internet that this can be caused by dirty contacts on the module, so I pulled it out and wiped it clean — not with ethanol as suggested, though; I used vodka instead. Same result: it didn't work.
- I swapped the RAM modules; that didn't work either.

I don't remember exactly, but with the installation the GRUB menu also broke; what I mean is that I get a black screen for 3 seconds (which I can configure from /etc/default/grub).

The only place where I kind of "broke the rules" is that before buying it, I visited the Crucial website for my laptop (google for "Lenovo Z710 compatible upgrades crucial" — they have a very fluent interface for choosing upgrades), and it says that the max RAM is 16 GB (8 + 8). I ignored that.

[update] Antz's answer kind of solves my problem, but the real answer, with a code example, was given on the official Linux Mint forum. sudo dmidecode gave huge output, but it mentioned the error I had:

Handle 0x0005, DMI type 5, 24 bytes
Memory Controller Information
        Error Detecting Method: None
        Error Correcting Capabilities: None
        Supported Interleave: One-way Interleave
        Current Interleave: One-way Interleave
        Maximum Memory Module Size: 8192 MB
In summary, one of two things generally happens: the memory works but is limited to the maximum amount supported by the motherboard, or the memory doesn't work at all. Let me go into a bit of detail.

On every motherboard there is a controller for accessing the RAM. The limiting factor is how much memory can be accessed (or addressed) by that memory controller. Theoretically, a 64-bit CPU can access 2^64 bytes of RAM. For practical reasons, however, the number of address lines actually etched into a motherboard is much smaller, and the controller is built to access up to a specific number of addresses. It can address fewer memory locations just fine as well. That determines the range and maximum amount of memory.

So when memory is installed with more addressable bytes than the controller understands, the best outcome is that only the lower portion of the RAM is used. However, because of the way memory is constructed, it's also possible that the larger module won't work at all, which seems to be the case with yours. Again, it depends on how the motherboard handles memory errors.

This Stack Exchange post gives more detailed information concerning your RAM issue: What happens when more RAM is installed than the motherboard supports? You can also read this: RAM.
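The arithmetic behind "number of address lines" is just powers of two; as a quick illustration (line counts here are illustrative, not your laptop's actual wiring):

```shell
# With n physical address lines, a byte-addressed controller can reach 2^n bytes.
# 36 lines reach 64 GiB; one line fewer halves that to 32 GiB.
echo $(( (1 << 36) / 1024 / 1024 / 1024 ))   # prints 64
echo $(( (1 << 35) / 1024 / 1024 / 1024 ))   # prints 32
```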
Mint is not detecting new memory (RAM)
I am trying to run Docker, but I need more memory on my Ubuntu 16.04 system.

free -m

              total        used        free      shared  buff/cache   available
Mem:           7914        4024        3072          83         817        3448
Swap:          8127          14        8113

When I run Docker:

Setting advertised host to 127.0.0.1. Operating system RAM available is 3344 MiB, which is less than the lowest recommended of 5120 MiB. Your system performance may be seriously impacted.

What should I do to get more RAM? Is this possible?
Use the command below to identify the top 10 memory-consuming processes, so that you can troubleshoot accordingly:

ps axo %mem,command,pid | sort -nr | head

To drop the page cache, use the command below (the write to drop_caches needs root, and note that dropping caches normally hurts performance rather than helping, since the kernel just has to re-read that data from disk):

sync; echo 1 > /proc/sys/vm/drop_caches
How to clear cache,swap and what are the limits?
I am experiencing a weird issue lately: sometimes (I cannot reproduce it on purpose), my system uses all its swap, despite there being more than enough free RAM. When this happens, the system becomes unresponsive for a couple of minutes, then the OOM killer kills either a "random" process, which does not help much, or the X server. If it kills a "random" process, the system does not become responsive (there is still no swap but much free RAM); if it kills X, the swap is freed and the system becomes responsive again.

Output of free when it happens:

$ free -htl
              total        used        free      shared  buff/cache   available
Mem:           7.6G        1.4G         60M        5.7G        6.1G        257M
Low:           7.6G        7.5G         60M
High:            0B          0B          0B
Swap:          3.9G        3.9G          0B
Total:          11G        5.4G         60M

uname -a:

Linux fedora 4.4.7-300.fc23.x86_64 #1 SMP Wed Apr 13 02:52:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Swappiness:

cat /proc/sys/vm/swappiness
5

Relevant section in dmesg: http://pastebin.com/0P0TLfsC

tmpfs:

$ df -h -t tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3.8G  1.5M  3.8G   1% /dev/shm
tmpfs           3.8G  1.7M  3.8G   1% /run
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
tmpfs           3.8G  452K  3.8G   1% /tmp
tmpfs           776M   16K  776M   1% /run/user/42
tmpfs           776M   32K  776M   1% /run/user/1000

Meminfo: http://pastebin.com/CRmitCiJ

top -o SHR -n 1

Tasks: 231 total,   1 running, 230 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.5 us,  3.0 sy,  0.3 ni, 86.9 id,  1.3 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  7943020 total,   485368 free,   971096 used,  6486556 buff/cache
KiB Swap:  4095996 total,  1698992 free,  2397004 used.
989768 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
   2066 mkamlei+  20   0 8342764 163908 145208 S   0.0  2.1   0:59.62 Xorg
   2306 mkamlei+  20   0 1892816 138536  27168 S   0.0  1.7   1:25.47 gnome-shell
   3118 mkamlei+  20   0  596392  21084  13152 S   0.0  0.3   0:04.86 gnome-terminal-
   1646 gdm       20   0 1502632  60324  12976 S   0.0  0.8   0:01.91 gnome-shell
   2269 mkamlei+  20   0 1322592  22440   8124 S   0.0  0.3   0:00.87 gnome-settings-
    486 root      20   0   47048   8352   7656 S   0.0  0.1   0:00.80 systemd-journal
   2277 mkamlei+   9 -11  570512  10080   6644 S   0.0  0.1   0:15.33 pulseaudio
   2581 mkamlei+  20   0  525424  19272   5796 S   0.0  0.2   0:00.37 redshift-gtk
   1036 root      20   0  619016   9204   5408 S   0.0  0.1   0:01.70 NetworkManager
   1599 gdm       20   0 1035672  11820   5120 S   0.0  0.1   0:00.28 gnome-settings-
   2386 mkamlei+  20   0  850856  24948   4944 S   0.0  0.3   0:05.84 goa-daemon
   2597 mkamlei+  20   0 1138200  13104   4596 S   0.0  0.2   0:00.28 evolution-alarm
   2369 mkamlei+  20   0 1133908  16472   4560 S   0.0  0.2   0:00.49 evolution-sourc
   2529 mkamlei+  20   0  780088  54080   4380 S   0.0  0.7   0:01.14 gnome-software
   2821 mkamlei+  20   0 1357820  44320   4308 S   0.0  0.6   0:00.23 evolution-calen
   2588 mkamlei+  20   0 1671848  55744   4300 S   0.0  0.7   0:00.49 evolution-calen
   2525 mkamlei+  20   0  613512   8928   4188 S   0.0  0.1   0:00.19 abrt-applet

ipcs:

[mkamleithner@fedora ~]$ ipcs -m -t

------ Shared Memory Attach/Detach/Change Times --------
shmid      owner      attached             detached             changed
294912     mkamleithn Apr 30 20:29:16      Not set              Apr 30 20:29:16
393217     mkamleithn Apr 30 20:29:19      Apr 30 20:29:19      Apr 30 20:29:17
491522     mkamleithn Apr 30 20:42:21      Apr 30 20:42:21      Apr 30 20:29:18
524291     mkamleithn Apr 30 20:38:10      Apr 30 20:38:10      Apr 30 20:29:18
786436     mkamleithn Apr 30 20:38:12      Not set              Apr 30 20:38:12

[mkamleithner@fedora ~]$ ipcs

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 294912     mkamleithn 600        524288     2          dest
0x00000000 393217     mkamleithn 600        2576       2          dest
0x00000000 491522     mkamleithn 600
4194304    2          dest
0x00000000 524291     mkamleithn 600        524288     2          dest
0x00000000 786436     mkamleithn 600        4194304    2          dest

------ Semaphore Arrays --------
key        semid      owner      perms      nsems

[mkamleithner@fedora ~]$ ipcs -m -t

------ Shared Memory Attach/Detach/Change Times --------
shmid      owner      attached             detached             changed
294912     mkamleithn Apr 30 20:29:16      Not set              Apr 30 20:29:16
393217     mkamleithn Apr 30 20:29:19      Apr 30 20:29:19      Apr 30 20:29:17
491522     mkamleithn Apr 30 20:42:21      Apr 30 20:42:21      Apr 30 20:29:18
524291     mkamleithn Apr 30 20:38:10      Apr 30 20:38:10      Apr 30 20:29:18
786436     mkamleithn Apr 30 20:38:12      Not set              Apr 30 20:38:12

[mkamleithner@fedora ~]$ sudo grep 786436 /proc/*/maps
/proc/2084/maps:7ff4a56cc000-7ff4a5acc000 rw-s 00000000 00:05 786436 /SYSV00000000 (deleted)
/proc/3984/maps:7f4574d00000-7f4575100000 rw-s 00000000 00:05 786436 /SYSV00000000 (deleted)
[mkamleithner@fedora ~]$ sudo grep 524291 /proc/*/maps
/proc/2084/maps:7ff4a4593000-7ff4a4613000 rw-s 00000000 00:05 524291 /SYSV00000000 (deleted)
/proc/2321/maps:7fa9b8a67000-7fa9b8ae7000 rw-s 00000000 00:05 524291 /SYSV00000000 (deleted)
[mkamleithner@fedora ~]$ sudo grep 491522 /proc/*/maps
/proc/2084/maps:7ff4a4ad3000-7ff4a4ed3000 rw-s 00000000 00:05 491522 /SYSV00000000 (deleted)
/proc/2816/maps:7f2763ba1000-7f2763fa1000 rw-s 00000000 00:05 491522 /SYSV00000000 (deleted)
[mkamleithner@fedora ~]$ sudo grep 393217 /proc/*/maps
/proc/2084/maps:7ff4b1a60000-7ff4b1a61000 rw-s 00000000 00:05 393217 /SYSV00000000 (deleted)
/proc/2631/maps:7fb89be79000-7fb89be7a000 rw-s 00000000 00:05 393217 /SYSV00000000 (deleted)
[mkamleithner@fedora ~]$ sudo grep 294912 /proc/*/maps
/proc/2084/maps:7ff4a5510000-7ff4a5590000 rw-s 00000000 00:05 294912 /SYSV00000000 (deleted)
/proc/2582/maps:7f7902dd3000-7f7902e53000 rw-s 00000000 00:05 294912 /SYSV00000000 (deleted)

getting the process names:

[mkamleithner@fedora ~]$ ps aux | grep 2084
mkamlei+  2084  5.1  2.0 8149580 159272 tty2 Sl+ 20:29 1:10 /usr/libexec/Xorg vt2 -displayfd 3 -auth
/run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3
mkamlei+  5261  0.0  0.0  118476   2208 pts/0 S+ 20:52 0:00 grep --color=auto 2084
[mkamleithner@fedora ~]$ ps aux | grep 3984
mkamlei+  3984 11.4  3.6 1355100 293240 tty2 Sl+ 20:38 1:38 /usr/lib64/firefox/firefox
mkamlei+  5297  0.0  0.0  118472   2232 pts/0 S+ 20:52 0:00 grep --color=auto 3984

Should I also post the results for the other shmids? I don't really know how to interpret the output. How can I fix this?

Edit: Starting the game "Papers, Please" always seems to trigger this problem after some time. It also sometimes happens when this game has not been started, though.

Edit 2: This seems to be an X issue. On Wayland it does not happen. It might be due to custom settings in xorg.conf.

Final edit: For anyone experiencing the same problem: I was using DRI 2. Switching to DRI 3 fixes the problem. This is the relevant section in my xorg.conf:

Section "Device"
        Identifier  "Intel Graphics"
        Driver      "intel"
        Option      "AccelMethod" "sna"
#       Option      "Backlight"   "intel_backlight"
        BusID       "PCI:0:2:0"
        Option      "DRI"         "3"    # here
        Option      "TearFree"    "true"
EndSection

The relevant file on my system is in /usr/share/X11/xorg.conf.d/.
The man page for free describes the "shared" column as: memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available on kernels 2.6.32, displayed as zero if not available). So the man page definition of "shared" is not as helpful as it could be :(. If the tmpfs usage does not account for this high value of shared memory, then the value must represent some process(es) which did mmap() with MAP_SHARED|MAP_ANONYMOUS (or System V shared memory).

6G of shared memory on an 8G system is still a lot. Seriously, you don't want that, at least not on a desktop. It's weird that it seems to contribute to "buff/cache" as well, but I did a quick test with Python and that's just how it works.

To show the processes with the most shared memory, use top -o SHR -n 1.

System V shared memory

Finally, it's possible you have some horrible legacy software that uses System V shared memory segments. If they get leaked, they won't show up in top :(. You can list them with ipcs -m -t. Hopefully the most recently created one is still in use. Take the shmid number, e.g.:

$ ipcs -m -t

------ Shared Memory Attach/Detach/Change Times --------
shmid      owner      attached             detached             changed
3538944    alan       Apr 30 20:35:15      Apr 30 20:35:15      Apr 30 16:07:41
3145729    alan       Apr 30 20:35:15      Apr 30 20:35:15      Apr 30 15:04:09
4587522    alan       Apr 30 20:37:38      Not set              Apr 30 20:37:38

# sudo grep 4587522 /proc/*/maps

The numbers shown in the /proc paths are the PIDs of the processes that use the SHM segment. (So you could e.g. grep the output of ps for those PID numbers.)

Apparent contradictions

Xorg has 8G mapped, even though you don't have separate video card RAM. It only has 150M resident. It's not that the rest is swapped out, because you don't have enough swap space.

The SHM segments shown by ipcs are all attached to two processes, so none of them have leaked, and they should all show up in the SHR column of top (double-counted, even). It's OK if the number of pages used is less than the size of the memory segment; that just means there are pages that haven't been used.
But free says we have 6GB of allocated shared memory to account for, and we can't find that.
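A quick way to see tmpfs/anonymous shared memory being counted as Shmem is to write into /dev/shm and watch /proc/meminfo (the file name here is made up; the growth should roughly match the file size, barring concurrent activity):

```shell
# Read the Shmem counter (kB), create an 8 MiB file on tmpfs, read it again.
before=$(awk '/^Shmem:/ {print $2}' /proc/meminfo)
dd if=/dev/zero of=/dev/shm/shmem-demo bs=1M count=8 2>/dev/null
after=$(awk '/^Shmem:/ {print $2}' /proc/meminfo)
rm -f /dev/shm/shmem-demo
echo "Shmem grew by $(( after - before )) kB"   # roughly 8192 kB
```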
Linux using whole swap, becoming unresponsive while there is plenty of free RAM
I'm trying to prove that an application I developed is saturating the memory bandwidth. For pure bandwidth benchmarking I'm aware that there's STREAM, but it only measures the maximum sequential burst bandwidth in terms of MB/s. I can see the memory transfers/second while using PCM, but I need an external application to push the RAM with small (bytes, not kilobytes) random reads and random writes, to prove that the bandwidth I see is the maximum possible for the system. Edit: I've clarified the question.
I've obtained the numbers that I need via SysBench, which can do memory benchmarks with random access and with small blocks down to 1KiB in size.
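For reference, an invocation along these lines using sysbench 1.0's flag names (treat this as a sketch; defaults and option names vary between sysbench versions):

```shell
# Skip gracefully where sysbench isn't installed.
command -v sysbench >/dev/null 2>&1 || { echo "sysbench not installed, skipping"; exit 0; }

# Random-access writes in 1 KiB blocks over 1 GiB total; the run reports
# throughput in MiB/sec and operations/sec, which is what was needed here.
sysbench memory \
    --memory-block-size=1K \
    --memory-total-size=1G \
    --memory-access-mode=rnd \
    --memory-oper=write \
    run
```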
Flexible RAM Benchmarking Tool
When I download a lot of data (e.g. 3 GB) in one go, using a program such as Transmission or Wget, the computer always seems slightly sluggish, both progressively during the download and after it has finished, as though it had been using swap. However, the result of free always shows 0 bytes of swap used, both during the download and after. I typically notice that the program used to download is slower to close, and subsequent programs are slower to open, but only the first time they're opened, as though the data were being transferred from swap to RAM. My swap and free aren't faulty, since on other occasions free reports some swap usage as expected. My computer is never suspended nor hibernated; I don't use a screen saver; my computer is switched off at the end of every day. My computer has 4 GB of RAM and a fast processor, which never goes above ~20% usage when/after I'm downloading. I'm using Linux. What could be the cause of this behaviour?
The RAM in a computer is useful for two things: to store the memory of programs, and as a cache of recently-used disk content. On a typical healthy desktop system, about half the memory goes into each. You can check your memory usage with the free command; the “used” column of the “-/+ buffers/cache” is the figure for memory used for program data, and the “buffers” and “cache” values are the disk cache. When you've been downloading a lot of things, this data fills the disk cache. As it does so, something else will have to go, because memory is finite. It appears that you aren't running any programs that have infrequently-used data, so no data gets written to the swap; instead, other data is evicted from the cache. Programs are slower to open the first time after the download because you're used to the speed when the program code and data is already in the cache, but now they've been displaced from the cache to make room for the downloaded files. The download program is probably slower to close because of delayed writes: the files that it writes are buffered, and the data is only fully written to disk when the system isn't using the disk bandwidth for more important things or when the buffer memory needs to be repurposed, or by explicit request with the sync command. That you aren't seeing any swap at all is a bit strange. It suggests that you've tuned your swappiness to a value that reduces performance (swap usage is healthy, but there's a lot of advice on the web that suggests turning it off, which is almost always counter-productive).
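The cache-filling effect is easy to observe: write (or download) a file and watch the Cached figure in /proc/meminfo grow (the file name here is made up, and the exact growth depends on memory pressure):

```shell
# Cached (kB) before, write 64 MiB to disk, sync it out, Cached after.
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
dd if=/dev/zero of=./cache-demo.bin bs=1M count=64 2>/dev/null
sync
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
rm -f ./cache-demo.bin
echo "Cached grew by $(( after - before )) kB"
```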
Why is my computer sluggish after downloading a lot?
I have 2 instances of VLC running. One is playing; one is paused (and mostly swapped out).

top - 14:25:01 up 23 days, 19:19, 69 users,  load average: 2.36, 2.61, 4.19
Tasks: 905 total,   3 running, 894 sleeping,   2 stopped,   6 zombie
%Cpu(s): 11.9 us,  6.5 sy,  0.1 ni, 81.0 id,  0.4 wa,  0.0 hi,  0.0 si,  0.0 st
GiB Mem :   31.2 total,    0.8 free,   27.4 used,    2.9 buff/cache
GiB Swap:  158.3 total,   82.4 free,   75.8 used.    1.5 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 420221 tange     20   0 4066448 601160  28444 S  30.3  1.8   8:55.51 vlc   <-- playing
1329863 tange     20   0 2640256 131980  42300 S   0.7  0.4  11:47.28 vlc   <-- paused

The video is 1280x720 px at 30 fps, and when I force swapping out, only around 100 MB is swapped back in. Why are they taking up such massive amounts of memory? (600 MB for playing seems ridiculous.) What can I change to lower this usage?

Edit: I have investigated further. The numbers below are in kbytes, measured with 'time -v'; they agree with 'top'. These are resident, and they are at their max when I close VLC (in other words, they do not spike briefly and then settle at a lower level). "Playing" means playing the full video. "Pausing" means playing the first few seconds and then pausing until memory usage stabilizes.

Here is the graph of "Playing 1280x720 with 2087 big videos in list" (measured with ps aux every second for 600 sec):

Here is the graph of "Playing 1280x720 with 0 big videos in list" (measured with ps aux every second for 100 sec):

This shows that the usage for the "0 in list" case is slightly overestimated: the RSS tops out shortly after start and drops a little after 15 secs. VSize is pretty consistently 2.3 GB bigger than RSS.
Playing 640x360 with 5400 videos in list:
    Maximum resident set size (kbytes): 1096232
Pausing 640x360 with 5400 videos in list:
    Maximum resident set size (kbytes): 1101840
Playing 640x360 with 0 videos in list:
    Maximum resident set size (kbytes): 333228
Pausing 640x360 with 0 videos in list:
    Maximum resident set size (kbytes): 303792
Playing 1280x720 with 2087 big videos in list:
    Maximum resident set size (kbytes): 1273936
Pausing 1280x720 with 2087 big videos in list:
    Maximum resident set size (kbytes): 1190252
Playing 1280x720 with 0 videos in list:
    Maximum resident set size (kbytes): 185204
Pausing 1280x720 with 0 videos in list:
    Maximum resident set size (kbytes): 185352

This seems to indicate that the playlist has a huge impact on the RSS, whereas the resolution of the video does not. It is unclear why. VLC clearly caches the length of each video: their lengths slowly show up in the list, and this explains the slow increase in memory shown in the graph. But the length ought to be only a few bytes; whatever VLC is doing takes up 150 kB-500 kB of resident RAM per video in the list. I find 200 MB RSS to play a 1280x720 video reasonable, but not adding 800 MB RSS just to keep the playlist in RAM. Can I ask VLC not to cache that in RAM (and still keep my list)?
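The per-second sampling behind those graphs can be reproduced with a loop like this (shown here sampling the current shell's own PID so it runs anywhere; in practice you would use something like pgrep -n vlc):

```shell
# Sample the resident set size (RSS, in kB) of a process once per second.
pid=$$                          # hypothetically: pid=$(pgrep -n vlc)
for i in 1 2 3; do
    rss=$(ps -o rss= -p "$pid" | tr -d ' ')
    echo "$i $rss"
    sleep 1
done
```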
I found the culprit: Tools > Preferences > Show settings: All > Playlist > "Automatically preparse items". If this is on, VLC reads each file in the playlist to find the length of the video; apparently it also reads (and keeps) much more. My problem disappears (and VLC stays below 200 MB) when I disable it, and reappears when I enable it again. To me it looks like a bug in VLC: why would keeping the lengths of videos take up more than a few megabytes in total?
VLC takes up 600 MB RAM. Why?
I have a nice laptop - 32 GB of RAM, M2 (SATA) and 2.5' SSD (also SATA) - dual boot, Fedora 33 & Windows 2019 Server. I ran dmidecode and found a Maximum Capacity of 64GB - but the manufacturer (ASUS) says 32GB is the max! Now, I know that dmidecode isn't perfect, but I want to hear from those who have upgraded their RAM based on dmidecode despite the manufacturer's recommendations? Quote from link above: Beware that DMI data have proven to be too unreliable to be blindly trusted. Dmidecode does not scan your hardware, it only reports what the BIOS told it to. I also found this, which doesn't inspire confidence, where it says: Aniruddh yes the H300's only support 32gb ram max (officially) its not the cpu support in this case its the mobo limited/locked support all bios are locked so unless its a modded bios (which i strongly dont recommend doing neither its allowed to discuss anything about it in this community ) probably it wont support so until someone buys 32gb sodimms and test them theres no way to know if its will support or not and i doubt anyone would take such a risk on a such high priced ram without having sure if it would really work or not but anyway of you are willing to go for it also why would u need 64gb 32 its already too much no one will ever use them in full and its not having 64gb that would make the laptop faster in some rare cases too much unused ram could also cause some bottleneck and decrease the performance but good luck :) So, it appears even if the mobo specs allow a certain amount of RAM, the manufacturer may or may not have reduced this capacity from within the BIOS? Have I grasped the picture - am I taking a big chance going by dmidecode or should I persevere? Any references/URLs, tips - anything appreciated!
Example: a 4-socket Supermicro server with 512GB of installed RAM, via 32 x 16GB DIMMs.

    dmidecode | grep "Maximum Capacity"
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB
    Maximum Capacity: 384 GB

also reported by dmidecode for me:

    Locator: P1-DIMMA1    Bank Locator: NODE 1
    Locator: P1-DIMMA2    Bank Locator: NODE 1
    Locator: P1-DIMMA3    Bank Locator: NODE 1
    Locator: P1-DIMMB1    Bank Locator: NODE 1
    Locator: P1-DIMMB2    Bank Locator: NODE 1
    Locator: P1-DIMMB3    Bank Locator: NODE 1
    Locator: P1-DIMMC1    Bank Locator: NODE 2
    Locator: P1-DIMMC2    Bank Locator: NODE 2
    Locator: P1-DIMMC3    Bank Locator: NODE 2
    Locator: P1-DIMMD1    Bank Locator: NODE 2
    Locator: P1-DIMMD2    Bank Locator: NODE 2
    Locator: P1-DIMMD3    Bank Locator: NODE 2
    Locator: P2-DIMMA1    Bank Locator: NODE 3
    Locator: P2-DIMMA2    Bank Locator: NODE 3
    Locator: P2-DIMMA3    Bank Locator: NODE 3
    Locator: P2-DIMMB1    Bank Locator: NODE 3
    Locator: P2-DIMMB2    Bank Locator: NODE 3
    Locator: P2-DIMMB3    Bank Locator: NODE 3
    Locator: P2-DIMMC1    Bank Locator: NODE 4
    Locator: P2-DIMMC2    Bank Locator: NODE 4
    Locator: P2-DIMMC3    Bank Locator: NODE 4
    Locator: P2-DIMMD1    Bank Locator: NODE 4
    Locator: P2-DIMMD2    Bank Locator: NODE 4
    Locator: P2-DIMMD3    Bank Locator: NODE 4
    Locator: P3-DIMMA1    Bank Locator: NODE 5
    Locator: P3-DIMMA2    Bank Locator: NODE 5
    Locator: P3-DIMMA3    Bank Locator: NODE 5
    Locator: P3-DIMMB1    Bank Locator: NODE 5
    Locator: P3-DIMMB2    Bank Locator: NODE 5
    Locator: P3-DIMMB3    Bank Locator: NODE 5
    Locator: P3-DIMMC1    Bank Locator: NODE 6
    Locator: P3-DIMMC2    Bank Locator: NODE 6
    Locator: P3-DIMMC3    Bank Locator: NODE 6
    Locator: P3-DIMMD1    Bank Locator: NODE 6
    Locator: P3-DIMMD2    Bank Locator: NODE 6
    Locator: P3-DIMMD3    Bank Locator: NODE 6
    Locator: P4-DIMMA1    Bank Locator: NODE 7
    Locator: P4-DIMMA2    Bank Locator: NODE 7
    Locator: P4-DIMMA3    Bank Locator: NODE 7
    Locator: P4-DIMMB1    Bank Locator: NODE 7
    Locator: P4-DIMMB2    Bank Locator: NODE 7
    Locator: P4-DIMMB3    Bank Locator: NODE 7
    Locator: P4-DIMMC1    Bank Locator: NODE 8
    Locator: P4-DIMMC2    Bank Locator: NODE 8
    Locator: P4-DIMMC3    Bank Locator: NODE 8
    Locator: P4-DIMMD1    Bank Locator: NODE 8
    Locator: P4-DIMMD2    Bank Locator: NODE 8
    Locator: P4-DIMMD3    Bank Locator: NODE 8

I believe my server uses quad-channel RAM, and with 4 CPUs that is the reason for everything shown above. However, note the "Maximum Capacity" of 384 GB: where that is listed (which I did not show above), each entry is for a Physical Memory Array. It gets complicated, and you have to dive into the memory-channel specifics to get an accurate understanding, but my server does not have a maximum capacity of 384 GB of RAM, nor does it have a maximum capacity of 3072 GB. I believe the true maximum RAM available to the operating system is determined by the CPU/memory architecture; for my machine it is 768 GB, and I think under certain circumstances (for other servers) it can be 1.5 TB, if certain low-voltage DIMMs are used and the [server] BIOS supports it. Note, however, that neither figure corresponds to the reported number of 384, and those entries are all listed under Physical Memory Array in my case.

So it is a matter of interpretation, or misinterpretation rather. This "Maximum Capacity" does not refer to the maximum amount of usable RAM seen by the operating system; it reports on a low-level memory-channel interface. So while I don't doubt that dmidecode is not 100% reliable for every bit of hardware it interfaces with, you really have to dive in and understand what values it is trying to report.

Your laptop has 1 CPU and, I suppose, 2 memory channels, versus a 4-socket server like the one listed above. Obvious differences aside, based on what you said about your specific laptop, I suspect you are seeing a maximum capacity of 64GB at the hardware manufacturer's memory-channel level, but in the end, on that ASUS laptop, it is the ASUS BIOS code that is really in charge.
If ASUS says it only supports 32GB, I would believe that, because it is the BIOS code making that happen, not the 64GB memory-channel capability reported by dmidecode. It gets down to the computer-engineering level: I suspect ASUS (and everyone else) uses the same memory-channel hardware, which is capable of addressing a 64GB DIMM, but there is other hardware in play that actually enforces the 32GB limit. I wouldn't assume ASUS simply programmed the BIOS to limit the laptop to 32GB when it really could take 64GB.

You also mentioned the H300, which is an Intel chipset. Recognize that any consumer PC motherboard like that has only 2 DIMM slots, versus the 4 DIMM slots of a higher-end chipset such as the Z370; all those 2-DIMM motherboards are limited to 32GB of total RAM, versus 64GB for the 4-DIMM boards. So I would not try to put 64GB of RAM in your laptop; it's not going to work. For what it's worth, I can also tell you that I've tried installing Windows 7 Pro on my 512GB server and only 192GB shows up as usable in Windows.

    why would u need 64gb 32 its already too much no one will ever use them in full and its not having 64gb that would make the laptop faster in some rare cases too much unused ram could also cause some bottleneck and decrease the performance but good luck :)

Unused RAM does not cause a bottleneck or decrease performance. That answer is out of context and neglects a lot of low-level architecture and memory-channel layout, which is the real reason why/how RAM quantities can affect performance [at the hardware level]. And luck has nothing to do with it.
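To make the numbers concrete, shell arithmetic is enough to see why summing the per-array "Maximum Capacity" figures is meaningless here (a sketch using only the figures quoted above):

```shell
# 32 DIMMs x 16 GB each = the 512 GB actually installed.
echo "$((32 * 16)) GB installed"
# 8 arrays x 384 GB each = 3072 GB, which the server certainly cannot take;
# the per-array "Maximum Capacity" is not a whole-system limit.
echo "$((8 * 384)) GB if you (wrongly) summed the per-array maxima"
```

Neither 384 nor 3072 matches the real architectural limit (768 GB here), which is the point: the SMBIOS field describes a memory-channel interface, not the system.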
Maximum RAM - do I listen to dmidecode or the manufacturer?
1,431,696,284,000
I have a computer with a Windows 7 and a Debian OS disk partition. The computer has 12GB of RAM, as can be seen when logged in to Windows 7. However, the Debian partition only recognizes just under 4GB of RAM. Why would this be, and how can I fix it? When I run the free command I see the reduced RAM amount, and when I try to create a virtual machine in VirtualBox, the maximum RAM allowed for a VM tops out at that same low amount. As far as I understood (which is not saying much...), OS partitions are only disk partitions, not RAM partitions.

Edit: Running Debian 6 "Squeeze" 32-bit. Output of the free command:

                     total       used       free    ....
    Mem:           3619800     386568    3233232    ...
    -/+ buffers/cache:           66944    3552856
    Swap:           497972          0     497972

I don't have Gnome installed so I'm not really sure how to take a screenshot. But in VirtualBox you have a setting for RAM allocation for VMs, and on this Linux partition the option maxes out at 3584MB.
4GB of memory requires 32 bits to store addresses. Most 32-bit processor architectures can only address 4GB of memory, and older x86 CPUs are no exception. More recent 32-bit x86 CPUs can access more than 4GB of physical memory through a processor feature called PAE.¹ 64-bit x86 CPUs always have PAE. PAE requires a Linux kernel compilation option. Without this option, the kernel can only address 4GB of RAM, and some of that is lost because it's used by peripherals such as the graphics card. But with this option, the kernel won't work on processors that don't have the PAE feature. Debian's default kernel is compatible with most x86 processors but can't make use of some features of recent(-ish) processors such as PAE. To use more than 4GB of RAM, install a PAE-enabled kernel and reboot into it. On Debian squeeze, you need the linux-image-2.6-686-bigmem package. If you have a 64-bit CPU, you can instead install a 64-bit kernel: linux-image-2.6-amd64. With a 64-bit kernel, you can make use of more than 4GB of RAM, and you can run both 32-bit and 64-bit applications. Or you can install a whole 64-bit distribution (the amd64 architecture). To find out whether your processor is a 64-bit one, run grep -w lm /proc/cpuinfo — if a line with flags : … lm … appears, you have a 64-bit CPU. Note that on a 32-bit system, the size of virtual memory is still limited to 4GB. On Linux, that's split with 1–3GB for the kernel and 1–3GB for the process. This is the limit of addressable memory in a process; a 32-bit system can make use of more than 4GB of RAM because each process can use up to 1–3GB of that RAM. So if you want to run a VirtualBox VM with more than 3GB of RAM, you'll need to install a 64-bit distribution.
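The checks above can be combined into a small sketch (the flag names are taken verbatim from /proc/cpuinfo):

```shell
# lm ("long mode") means the CPU is 64-bit capable;
# pae means a 32-bit PAE kernel could address more than 4GB of RAM.
if grep -qw lm /proc/cpuinfo; then echo "64-bit CPU"; else echo "32-bit CPU"; fi
if grep -qw pae /proc/cpuinfo; then echo "PAE supported"; fi
```

If both checks print something, either the bigmem (PAE) kernel or the amd64 kernel will let the system use all 12GB.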
All of system ram not available on Debian OS partition
1,431,696,284,000
I read from here that I could load a file into RAM for faster access using the command below.

    cat filename > /dev/null

However, I wanted to test whether that statement is really true, so I did the following.

Create a 2.5 GB test file:

    dd if=/dev/zero of=demo.txt bs=100M count=10

Measure the file access time:

    mytime="$(time ( cat demo.txt ) 2>&1 1>/dev/null )"
    echo $mytime
    real 0m19.191s user 0m0.007s sys 0m1.295s

As the advice suggests, I then loaded the file into the cache:

    cat demo.txt > /dev/null

Now I assume the file is loaded into the cache, so I timed reading it again. This is the value I got:

    mytime="$(time ( cat demo.txt ) 2>&1 1>/dev/null )"
    echo $mytime
    real 0m18.701s user 0m0.010s sys 0m1.275s

I repeated the last step for 5 more iterations, and these are the values I got:

    real 0m18.574s user 0m0.007s sys 0m1.279s
    real 0m18.584s user 0m0.012s sys 0m1.267s
    real 0m19.017s user 0m0.009s sys 0m1.268s
    real 0m18.533s user 0m0.012s sys 0m1.263s
    real 0m18.757s user 0m0.005s sys 0m1.274s

So my question is: why does the time barely change even when the file is supposedly in the cache? I was expecting that once the file was cached, the time would come down on each iteration, but that doesn't seem to be the case.
Nope nope nope! This is not how it is done. Linux (the kernel) can choose to put some files in the cache and to evict them whenever it wants. You really can't be sure that anything is in the cache or not, and this command won't change that (much). The advice in the link you provided is wrong in several ways.

The cache is an OS thing. You don't need to cat the file to /dev/null to take advantage of it. This is actually a very bad idea, because you are forcing Linux to read the file one extra time. Say you plan to read one file 4 times. If you do nothing special, the first read will be quite slow and the 3 subsequent ones should be faster (because of caching). If you use this "trick", the first read will be quite slow and all 4 subsequent ones should be faster (but not free). Just let Linux handle it.

This command is only useful if you want to make sure that Linux keeps the file in RAM, in which case you would have to run it repeatedly while your system is idle. But as I said, this is also misguided: you can never be sure that Linux actually cached the file, and even if it did, any given read may still come from disk (if the file was never cached or was already evicted). By doing this repeatedly on a big file, you basically trick Linux into thinking the file should be in RAM at the expense of other files that you actually use more often.

So the conclusion: don't do this kind of trick; it is usually counterproductive. However, if you know that some small files (compared to your RAM size) would really benefit from being accessed from RAM, you can use a tmpfs mount and store your files there. On modern distributions, the /tmp folder is usually tmpfs. Another alternative that I have personally found worthwhile is to compress your file at the filesystem level (with BTRFS, for instance) or manually (but the manual route requires that the program accessing the file be able to decompress it).
Of course, your files should benefit from compression, otherwise this is useless. This way, you could be much more confident that Linux keeps your compressed file in RAM (since it's smaller) and if your application is IO bound, loading 100MB from disk instead of loading 10GB should be much faster.
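If you do try the tmpfs route, most distributions already mount one at /dev/shm, so you can experiment without editing /etc/fstab. A sketch (demo.txt and its contents are stand-ins):

```shell
# /dev/shm is a tmpfs mount on most Linux systems: files placed there
# live in RAM (swap-backed), so re-reads never have to touch the disk.
printf 'hot data' > /dev/shm/demo.txt
cat /dev/shm/demo.txt > /dev/null   # served from RAM
rm /dev/shm/demo.txt                # frees the memory again
```

Unlike the cat-to-/dev/null trick, this gives a hard guarantee: the file has no disk copy at all, so there is nothing for the kernel to evict back to disk.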
file access time after loading file into the cache
1,431,696,284,000
When I check my CPU cache with the dmidecode command, the Cache Configuration is reported as "Not Socketed". What does that mean?

    prayag@prayag:~/hacker_/draobkcalb$ sudo dmidecode -t cache
    # dmidecode 2.11
    SMBIOS 2.5 present.

    Handle 0x000A, DMI type 7, 19 bytes
    Cache Information
        Socket Designation: Internal Cache
        Configuration: Enabled, Not Socketed, Level 1
        Operational Mode: Write Back
        Location: Internal
        Installed Size: 32 kB
        Maximum Size: 32 kB
        Supported SRAM Types: Synchronous
        Installed SRAM Type: Synchronous
        Speed: Unknown
        Error Correction Type: Unknown
        System Type: Unknown
        Associativity: Unknown

    Handle 0x000B, DMI type 7, 19 bytes
    Cache Information
        Socket Designation: External Cache
        Configuration: Enabled, Not Socketed, Level 2
        Operational Mode: Write Back
        Location: External
        Installed Size: 2048 kB
        Maximum Size: 2048 kB
        Supported SRAM Types: Synchronous
        Installed SRAM Type: Synchronous
        Speed: Unknown
        Error Correction Type: Unknown
        System Type: Unknown
        Associativity: Unknown
According to the relevant dmidecode source code, the information presented by the program comes from the DMTF SMBIOS documentation that you can find here. On page 59 of version 2.8.0 of the SMBIOS spec, the reference to the bits tested for by dmidecode is given, but without a clear definition of what 'socketed' means (at least not in any of the preceding pages). For normal memory and CPUs, 'socket' is used in that document as a physical place an item can be inserted; a socket might be available and/or populated. From this I think you can safely assume that 'Not Socketed' means the Level 1 and 2 caches on your machine do not have a separate physical socket. For modern processors, with caches at the speed they are, a cache external to the CPU chip (in a socket of its own) would probably not be able to run at competitive speeds. But I remember this was not always the case, and that installing CPU cache memory used to be optional.
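As a cross-check, the kernel exports its own view of the same caches under sysfs, independent of whatever the BIOS put in the SMBIOS tables. A sketch (the index numbering varies by CPU model):

```shell
# Each indexN directory describes one cache of cpu0 (L1 data,
# L1 instruction, L2, ...); level, type and size are plain-text files.
for c in /sys/devices/system/cpu/cpu0/cache/index*; do
    echo "$c: L$(cat "$c/level") $(cat "$c/type") $(cat "$c/size")"
done
```

This view comes from CPUID rather than the DMI tables, so it is useful when the SMBIOS data looks stale or incomplete, as it often does.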
External Cache not socketed
1,431,696,284,000
I am running unzip to extract huge files. However, my CPU usage is under 15 percent, and only 1-1.2 GB out of 8 GB of RAM is in use. Is there a way to allocate more CPU power and RAM to the unzip program? Thank you. I am on Lubuntu 16.04.
Programs take all the memory and CPU power they can get, unless they have built-in limitations. unzip has no such built-in limitations. You could give it less, but you can't give it more, because by default it's allowed to take as much as it wants. Unzipping is not a memory-intensive process. The main memory cost of unzipping a huge archive is that unzip keeps the list of files in memory. The limiting factor for speed may be CPU power or disk (or the network, if you're reading or writing a file over the network). It depends on how fast your disk is relative to your CPU. Check whether the process is taking 100% of one core. If it isn't, then the only way to speed it up is to speed up the input/output. This can mean a faster disk, or arranging to put the input and the output on separate disks. If the process is taking 100% of one core, then you can speed it up by parallelizing. For many compression formats, decompression of one file is inherently non-parallelizable, because the format is very adaptive: compression is achieved by looking for repeated patterns and replacing them with an indirect reference to a previous pattern. Some compression formats have "reinitialization points" that allow decompressing each block independently; I know this is at least the case for bzip2. Some compression tools do this even if the format doesn't require it. But as far as I know, this is not the case for zip. On the other hand, zip compresses each member of an archive separately, so it's possible to decompress each file independently. Thus, if you have n cores, you can keep all of them busy decompressing separate files (if your I/O is up to speed). The problem is then to find a parallel unzip implementation. I think p7zip supports it with 7z x -mmt=on foo.zip or 7z x -mmt=8 (to use 8 cores), but p7zip's documentation is not very good and I haven't confirmed that this does parallelize.
Allocate more memory and cpu resources to a program
1,431,696,284,000
I used to have two RAM sticks of 8GB each. I switched one of them for a 16GB stick and expected to now have a total of 24GB, but I have 20GB instead.

The result of free -h:

                  total        used        free      shared  buff/cache   available
    Mem:           19Gi       2.8Gi        12Gi       105Mi       3.9Gi        16Gi
    Swap:         2.0Gi          0B       2.0Gi

cat /proc/meminfo:

    MemTotal:       20292048 kB
    MemFree:        13254056 kB
    MemAvailable:   16923208 kB
    Buffers:          269448 kB
    Cached:          3706108 kB
    SwapCached:            0 kB
    Active:          1319968 kB
    Inactive:        4874796 kB
    Active(anon):      14752 kB
    Inactive(anon):  2312796 kB
    Active(file):    1305216 kB
    Inactive(file):  2562000 kB
    Unevictable:         132 kB
    Mlocked:             132 kB
    SwapTotal:       2097148 kB
    SwapFree:        2097148 kB
    Dirty:               524 kB
    Writeback:             0 kB
    AnonPages:       2219512 kB
    Mapped:          1341800 kB
    Shmem:            108248 kB
    KReclaimable:     159592 kB
    Slab:             372516 kB
    SReclaimable:     159592 kB
    SUnreclaim:       212924 kB
    KernelStack:       23104 kB
    PageTables:        56624 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    12243172 kB
    Committed_AS:   13539940 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:       67940 kB
    VmallocChunk:          0 kB
    Percpu:            25088 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:         0 kB
    ShmemHugePages:        0 kB
    ShmemPmdMapped:        0 kB
    FileHugePages:         0 kB
    FilePmdMapped:         0 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:               0 kB
    DirectMap4k:      635844 kB
    DirectMap2M:    10717184 kB

sudo dmidecode -t memory:

    Getting SMBIOS data from sysfs.
    SMBIOS 3.3.0 present.
    # SMBIOS implementations newer than version 3.2.0 are not
    # fully supported by this version of dmidecode.

    Handle 0x0022, DMI type 16, 23 bytes
    Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 64 GB
        Error Information Handle: 0x0025
        Number Of Devices: 2

    Handle 0x0023, DMI type 17, 92 bytes
    Memory Device
        Array Handle: 0x0022
        Error Information Handle: 0x0026
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 16384 MB
        Form Factor: SODIMM
        Set: None
        Locator: DIMM 0
        Bank Locator: P0 CHANNEL A
        Type: DDR4
        Type Detail: Synchronous Unbuffered (Unregistered)
        Speed: 3200 MT/s
        Manufacturer: Unknown
        Serial Number: E81F0ECB
        Asset Tag: Not Specified
        Part Number: CT16G4SFRA32A.M16FR
        Rank: 2
        Configured Memory Speed: 3200 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 1.2 V
        Configured Voltage: 1.2 V
        Memory Technology: DRAM
        Memory Operating Mode Capability: Volatile memory
        Firmware Version: Unknown
        Module Manufacturer ID: Bank 6, Hex 0x9B
        Module Product ID: Unknown
        Memory Subsystem Controller Manufacturer ID: Unknown
        Memory Subsystem Controller Product ID: Unknown
        Non-Volatile Size: None
        Volatile Size: 16 GB
        Cache Size: None
        Logical Size: None

    Handle 0x0024, DMI type 17, 92 bytes
    Memory Device
        Array Handle: 0x0022
        Error Information Handle: 0x0027
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 8192 MB
        Form Factor: SODIMM
        Set: None
        Locator: DIMM 0
        Bank Locator: P0 CHANNEL B
        Type: DDR4
        Type Detail: Synchronous Unbuffered (Unregistered)
        Speed: 3200 MT/s
        Manufacturer: Samsung
        Serial Number: 00000000
        Asset Tag: Not Specified
        Part Number: M471A1G44AB0-CWE
        Rank: 1
        Configured Memory Speed: 3200 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 1.2 V
        Configured Voltage: 1.2 V
        Memory Technology: DRAM
        Memory Operating Mode Capability: Volatile memory
        Firmware Version: Unknown
        Module Manufacturer ID: Bank 1, Hex 0xCE
        Module Product ID: Unknown
        Memory Subsystem Controller Manufacturer ID: Unknown
        Memory Subsystem Controller Product ID: Unknown
        Non-Volatile Size: None
        Volatile Size: 8 GB
        Cache Size: None
        Logical Size: None

Linux: Ubuntu 20.04.6 LTS

Is there something I can do to see all of my RAM, or is this a hardware issue?

UPD: journalctl -b0 -k logs:

    BIOS-provided physical RAM map:
    BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
    BIOS-e820: [mem 0x000000000009f000-0x00000000000bffff] reserved
    BIOS-e820: [mem 0x0000000000100000-0x0000000009efffff] usable
    BIOS-e820: [mem 0x0000000009f00000-0x0000000009f0efff] ACPI NVS
    BIOS-e820: [mem 0x0000000009f0f000-0x00000000b89e8fff] usable
    BIOS-e820: [mem 0x00000000b89e9000-0x00000000babe8fff] reserved
    BIOS-e820: [mem 0x00000000babe9000-0x00000000c8dfefff] usable
    BIOS-e820: [mem 0x00000000c8dff000-0x00000000cbdfefff] reserved
    BIOS-e820: [mem 0x00000000cbdff000-0x00000000cdf7efff] ACPI NVS
    BIOS-e820: [mem 0x00000000cdf7f000-0x00000000cdffefff] ACPI data
    BIOS-e820: [mem 0x00000000cdfff000-0x00000000cdffffff] usable
    BIOS-e820: [mem 0x00000000ce000000-0x00000000cfffffff] reserved
    BIOS-e820: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
    BIOS-e820: [mem 0x00000000fdc00000-0x00000000fdcfffff] reserved
    BIOS-e820: [mem 0x00000000fe000000-0x00000000fe0fffff] reserved
    BIOS-e820: [mem 0x00000000fec00000-0x00000000fec01fff] reserved
    BIOS-e820: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
    BIOS-e820: [mem 0x00000000fec20000-0x00000000fec20fff] reserved
    BIOS-e820: [mem 0x00000000fed80000-0x00000000fed81fff] reserved
    BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
    BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
    BIOS-e820: [mem 0x0000000100000000-0x000000052e2fffff] usable
    BIOS-e820: [mem 0x000000052e300000-0x000000062fffffff] reserved
    extended physical RAM map:
    reserve setup_data: [mem 0x0000000000000000-0x000000000009efff] usable
    reserve setup_data: [mem 0x000000000009f000-0x00000000000bffff] reserved
    reserve setup_data: [mem 0x0000000000100000-0x0000000009efffff] usable
    reserve setup_data: [mem 0x0000000009f00000-0x0000000009f0efff] ACPI NVS
    reserve setup_data: [mem 0x0000000009f0f000-0x00000000adb86017] usable
    reserve setup_data: [mem 0x00000000adb86018-0x00000000adb93857] usable
    reserve setup_data: [mem 0x00000000adb93858-0x00000000b2f69017] usable
    reserve setup_data: [mem 0x00000000b2f69018-0x00000000b2f77057] usable
    reserve setup_data: [mem 0x00000000b2f77058-0x00000000b89e8fff] usable
    reserve setup_data: [mem 0x00000000b89e9000-0x00000000babe8fff] reserved
    reserve setup_data: [mem 0x00000000babe9000-0x00000000c8dfefff] usable
    reserve setup_data: [mem 0x00000000c8dff000-0x00000000cbdfefff] reserved
    reserve setup_data: [mem 0x00000000cbdff000-0x00000000cdf7efff] ACPI NVS
    reserve setup_data: [mem 0x00000000cdf7f000-0x00000000cdffefff] ACPI data
    reserve setup_data: [mem 0x00000000cdfff000-0x00000000cdffffff] usable
    reserve setup_data: [mem 0x00000000ce000000-0x00000000cfffffff] reserved
    reserve setup_data: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
    reserve setup_data: [mem 0x00000000fdc00000-0x00000000fdcfffff] reserved
    reserve setup_data: [mem 0x00000000fe000000-0x00000000fe0fffff] reserved
    reserve setup_data: [mem 0x00000000fec00000-0x00000000fec01fff] reserved
    reserve setup_data: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
    reserve setup_data: [mem 0x00000000fec20000-0x00000000fec20fff] reserved
    reserve setup_data: [mem 0x00000000fed80000-0x00000000fed81fff] reserved
    reserve setup_data: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
    reserve setup_data: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
    reserve setup_data: [mem 0x0000000100000000-0x000000052e2fffff] usable
    reserve setup_data: [mem 0x000000052e300000-0x000000062fffffff] reserved
    e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
    e820: reserve RAM buffer [mem 0x09f00000-0x0bffffff]
    e820: reserve RAM buffer [mem 0xadb86018-0xafffffff]
    e820: reserve RAM buffer [mem 0xb2f69018-0xb3ffffff]
    e820: reserve RAM buffer [mem 0xb321f000-0xb3ffffff]
    e820: reserve RAM buffer [mem 0xb3351000-0xb3ffffff]
    e820: reserve RAM buffer [mem 0xb89e9000-0xbbffffff]
    e820: reserve RAM buffer [mem 0xc8dff000-0xcbffffff]
    e820: reserve RAM buffer [mem 0xce000000-0xcfffffff]
    e820: reserve RAM buffer [mem 0x52e300000-0x52fffffff]
    [drm] vm size is 262144 GB, 4 levels, block size is 9-bit, fragment size is 9-bit
    amdgpu 0000:05:00.0: amdgpu: VRAM: 4096M 0x000000F400000000 - 0x000000F4FFFFFFFF (4096M used)
    amdgpu 0000:05:00.0: amdgpu: GART: 1024M 0x0000000000000000 - 0x000000003FFFFFFF
    amdgpu 0000:05:00.0: amdgpu: AGP: 267419648M 0x000000F800000000 - 0x0000FFFFFFFFFFFF
    [drm] Detected VRAM RAM=4096M, BAR=4096M
    [drm] RAM width 128bits DDR4
    [drm] amdgpu: 4096M of VRAM memory ready
    [drm] amdgpu: 4096M of GTT memory ready.
    [drm] GART: num cpu pages 262144, num gpu pages 262144
    [drm] PCIE GART of 1024M enabled.
    [drm] PTB located at 0x000000F400900000
According to your posted logs and described behavior, I believe your AMD Ryzen 7 5700U includes an integrated GPU, which does not have its own dedicated RAM. There may be an option in your BIOS to adjust the integrated graphics VRAM size, which is taken away from your installed system RAM. Your logs currently show that 4096M is allocated to the integrated GPU as VRAM, so you can expect that amount of RAM to be unavailable to the system.
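A quick cross-check from the running system: compare MemTotal against the physically installed amount; the difference should roughly match the VRAM carve-out shown in the logs. A sketch (the 1048576 divisor converts kB to GiB):

```shell
# MemTotal is what the kernel was given after firmware/iGPU reservations.
# With 24 GiB installed and a 4096M VRAM carve-out, the asker's meminfo
# (MemTotal: 20292048 kB) works out to about 19.4 GiB.
awk '/^MemTotal:/ {printf "usable: %.1f GiB\n", $2/1048576}' /proc/meminfo
```

If shrinking the BIOS "UMA frame buffer" / iGPU memory setting is possible, this number should grow by the same amount.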
Linux see 20GB ram instead of 24 GB
1,431,696,284,000
Recently, I dumped my memory strings (just because I could) using sudo cat /dev/mem | strings. Upon reviewing this dump, I noticed some very interesting things:

    .symtab
    .strtab
    .shstrtab
    .note.gnu.build-id
    .rela.text
    .rela.init.text
    .rela.text.unlikely
    .rela.exit.text
    .rela__ksymtab
    .rela__ksymtab_gpl
    .rela__kcrctab
    .rela__kcrctab_gpl
    .rela.rodata
    .rodata.str1.8
    .rela__mcount_loc
    .rodata.str1.1
    .rela__bug_table
    .rela.smp_locks
    .modinfo
    __ksymtab_strings
    .rela__tracepoints_ptrs
    __tracepoints_strings
    __versions
    .rela.data
    .data.unlikely
    .rela__verbose
    .rela__jump_table
    .rela_ftrace_events
    .rela.ref.data
    .rela__tracepoints
    .rela.gnu.linkonce.t6

These lines all seem to be related in some way: they are all (very) near each other in memory, they all have similar .<name> prefixes, and they all seem to refer to each other. What would cause these strings to appear, and why?
These look very much like section names from the Linux kernel. The ones prefixed by .rela contain relocation information for the named section; e.g. .rela.text is the relocation information for the text section (where kernel object code is stored). Other sections of interest are:

    .modinfo - kernel module information
    .rela__ksymtab - relocation table for the kernel symbol table
    .rela.data - relocation table for the kernel data section
    .rodata.str1.1 - read-only data section for strings

et cetera. Running strings on /dev/mem will just find interesting strings in the system's physical memory; hence you managed to find some strings that are in the uncompressed vmlinuz Linux kernel.
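You can reproduce the effect on any ELF file on disk: section names like .rodata are stored as plain text in the binary's section-name table (.shstrtab), which is exactly why strings over /dev/mem surfaced them. A sketch using /bin/ls as a convenient ELF binary:

```shell
# -a treats the binary as text, -o prints only the matched text;
# the hit comes from the ELF section-name string table, not from code.
grep -a -o '\.rodata' /bin/ls | head -n 1
```

readelf -S on the same binary would list those sections with their offsets, confirming where the names come from.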
What are these memory strings? What do they do? [duplicate]
1,431,696,284,000
I have an Intel 11700 with 4*32 GB of RAM. When all 4 physical RAM slots are filled, BIOS, htop, sudo lshw, sudo dmidecode, dmesg (whatever command I use to display the total RAM on the system) all report that I have 128 GB of RAM. However, I can only use 57.2 GB, i.e. approximately half of the available RAM. I tested this by using malloc() in C and by creating files in tmpfs: the former eventually returns a NULL pointer, and the latter eventually reports that the device is out of space. More strangely, if I install only one or two RAM modules, i.e. 32 or 64 GB, I can still only use about half of the RAM, i.e. 12.1 or 28.7 GB.
Having malloc fail, especially at around 50% occupancy, is a symptom of strict allocation accounting, i.e. disabled overcommit. This is controlled by the vm.overcommit_memory sysctl, and can be seen with

    sysctl vm.overcommit_memory

If that shows 2, the kernel prevents overcommit, so heap resizes, mmaps etc. will fail at allocation time (rather than when the memory is actually used). The limit is set to swap plus vm.overcommit_kbytes, or swap plus vm.overcommit_ratio (as a percentage of physical memory). To get the behaviour you're expecting, set vm.overcommit_memory to 0.
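The same information is available without the sysctl binary, straight from /proc (a sketch):

```shell
# 0 = heuristic overcommit (the default), 1 = always allow, 2 = strict
cat /proc/sys/vm/overcommit_memory
# Under mode 2, allocations start failing once Committed_AS would
# exceed CommitLimit:
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo
```

To make a change persistent, put vm.overcommit_memory = 0 in /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) and apply it with sudo sysctl -p.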
Why I can only access half of RAM whatever the total is?
1,431,696,284,000
I have 2x 4GB (8GB) of RAM installed on my motherboard and the BIOS/UEFI confirms it, but Ubuntu 14.04 64-bit only sees 3424776kB, i.e. about 3.27GB. uname -a returns:

    3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Through searching, someone mentioned memory remapping, but I can't find such an option on my Gigabyte F2A55M-DS2; I think that means it is on by default.

    $ free -g
                 total       used       free     shared    buffers     cached
    Mem:             3          1          1          0          0          0
    -/+ buffers/cache:          1          1
    Swap:            3          0          3

    $ file /sbin/init
    /sbin/init: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=7d9cc5d4d6cb68aede9400492a7c5942c55c7598, stripped
Looks like the issue was related to updates and broken mirrors. I changed the mirror I was using to a different one, and the updates were successful. After a reboot, performance became smooth, and when I checked the RAM it already showed 7.2GB (it looks like the AMD APU uses some RAM too).
Using only 3.3GB but I have 8GB RAM even on Ubuntu 14.04 64bit