1,465,441,289,000
In a common Linux distribution, do utilities like rm, mv, ls, grep, wc, etc. run in parallel on their arguments? In other words, if I grep a huge file on a 32-threaded CPU, will it go faster than on a dual-core CPU?
You can get a first impression by checking whether the utility is linked with the pthread library. Any dynamically linked program that uses OS threads should use the pthread library. ldd /bin/grep | grep -F libpthread.so So for example on Ubuntu: for x in $(dpkg -L coreutils grep findutils util-linux | grep /bin/); d...
Are basic POSIX utilities parallelized?
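The linkage check from the answer can be sketched as a small loop over a few common utilities. Note one caveat the answer predates: on glibc 2.34 and later the pthread functions live inside libc.so itself, so the absence of a libpthread.so dependency is no longer conclusive evidence that a program is single-threaded.

```shell
# Check whether common utilities are dynamically linked against libpthread.
# On glibc >= 2.34 pthreads are merged into libc.so, so "no libpthread"
# here does not prove the program is single-threaded.
for util in grep ls wc sort; do
    bin=$(command -v "$util") || continue
    if ldd "$bin" 2>/dev/null | grep -q 'libpthread'; then
        echo "$util: linked against libpthread"
    else
        echo "$util: no libpthread dependency"
    fi
done
```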
1,465,441,289,000
For example, on OSX, it's even less than 512k. Is there any recommended size, bearing in mind that the app does not use recursion and does not allocate a lot of stack variables? I know the question is too broad and it highly depends on the usage, but still wanted to ask, as I was wondering if there's some hidden/inte...
As others have said, and as is mentioned in the link you provide in your question, having an 8MiB stack doesn’t hurt anything (apart from consuming address space — on a 64-bit system that won’t matter). Linux has used 8MiB stacks for a very long time; the change was introduced in version 1.3.7 of the kernel, in July 1...
Why on modern Linux, the default stack size is so huge - 8MB (even 10 on some distributions)
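Since the 8 MiB figure is just the soft RLIMIT_STACK default, it can be inspected and lowered per process with the shell's ulimit builtin. A minimal sketch:

```shell
# Inspect the current stack limit, and run a command under a smaller one.
# The soft limit is inherited by children, so lowering it in a subshell
# only affects what runs inside that subshell.
ulimit -s                            # current soft limit in KiB, often 8192
sh -c 'ulimit -s 1024 && ulimit -s'  # child process with a 1 MiB stack limit
```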
1,465,441,289,000
Is there any way to have make use multiple jobs (6 is ideal on my system) system-wide, instead of just adding -j6 to the command line? So that if I run make, it acts the same as if I were running make -j6? I want this functionality because I install a lot of packages from the AUR using pacaur (I'm on Arc...
(pacaur uses makepkg, see https://wiki.archlinux.org/index.php/Makepkg ) In /etc/makepkg.conf add MAKEFLAGS="-j$(expr $(nproc) \+ 1)" to run #cores + 1 compiling jobs concurrently. When using bash you can also add export MAKEFLAGS="-j$(expr $(nproc) \+ 1)" to your ~/.bashrc to make this default for all make commands,...
Use multi-threaded make by default?
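The MAKEFLAGS trick from the answer can be written with plain arithmetic expansion instead of expr; either form works, this is just the more modern spelling:

```shell
# Set a default job count for every make invocation via the environment.
# $(nproc) prints the number of available CPUs; the conventional +1 keeps
# one job ready while another waits on I/O.
export MAKEFLAGS="-j$(( $(nproc) + 1 ))"
echo "$MAKEFLAGS"
```

Putting the export line in ~/.bashrc makes it the default for every make started from an interactive shell.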
1,465,441,289,000
When I run top -H, I see that my multiple mysql threads all have the same PID. However, in ps -eLf I see each one has a different PID: ps -eLf UID PID PPID LWP C NLWP STIME TTY TIME CMD mysql 1424 1 1424 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld mysql 1424 1 1481 0 17 18...
They are actually showing the same information in different ways. This is what the -f and -L options to ps do (from man ps, emphasis mine): -f               Do full-format listing. This option can be combined with many other UNIX-style options to add additional columns. It also causes the command arguments to...
Why do top and ps show different PIDs for the same processes?
1,465,441,289,000
I'm using htop and looking at a process (rg) which launched multiple threads to search for text in files, here's the tree view in htop: PID Command 1019 |- rg 'search this' 1021 |- rg 'search this' 1022 |- rg 'search this' 1023 |- rg 'search this' Why am I seeing PIDs for the process' threads? I though...
In Linux, each thread has a pid, and that’s what htop shows. The “process” to which all the threads belong is the thread whose pid matches its thread group id. In your case, grep Tgid /proc/1021/status would show the value 1019 (and this would be true for all the rg identifiers shown by htop). See Are threads implemen...
Why do threads have their own PID?
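The Tgid/Pid relationship described in the answer can be observed directly in /proc. For a single-threaded process the two values are equal; for a worker thread, Pid is the thread's own id and Tgid is the id htop would show as the owning process:

```shell
# Read the thread id (Pid) and thread group id (Tgid). Because /proc/self
# resolves inside grep, this prints grep's own (single-threaded) values,
# which are therefore equal.
grep -E '^(Tgid|Pid):' /proc/self/status
```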
1,465,441,289,000
I know how to gunzip a file to a selected location. But when it comes to utilizing all CPU power, many consider pigz instead of gzip. So, the question is how do I unpigz (and untar) a *.tar.gz file to a specific directory?
I found three solutions: With GNU tar, using the awesome -I option: tar -I pigz -xvf /path/to/archive.tar.gz -C /where/to/unpack/it/ With a lot of Linux piping (a "geek way"): unpigz < /path/to/archive.tar.gz | tar -xvC /where/to/unpack/it/ More portable (to other tar implementations): unpigz < /path/to/archive.t...
unpigz (and untar) to a specific directory
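The second solution from the answer can be exercised end to end with a throwaway archive; the paths under mktemp are illustrative, and the script falls back to plain gzip decompression when unpigz is not installed:

```shell
# Round-trip: build a small .tar.gz, then extract it into a chosen
# directory with tar's -C option, preferring unpigz when available.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dest"
echo hello > "$tmp/src/file.txt"
tar -czf "$tmp/archive.tar.gz" -C "$tmp/src" .
if command -v unpigz >/dev/null 2>&1; then
    unpigz < "$tmp/archive.tar.gz" | tar -x -C "$tmp/dest"
else
    tar -xzf "$tmp/archive.tar.gz" -C "$tmp/dest"
fi
cat "$tmp/dest/file.txt"
```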
1,465,441,289,000
I have a HUGE (and I mean huge) text file that I am going to process with vim. I could process it using two different (debian) machines. One is dual-core and one is octo-core. A single core on my dual-core box is faster than a single core on my octo-core box. Does 'vim' utilize multithreading in such a way as to make...
No, vim is not multithreaded. Multiple cores won't help you here. First we have to agree on what a huge file is. I suppose you mean a file larger than the RAM size. Vim was not designed for large files. Furthermore, if the file doesn't contain enough line endings, vim might not be able to open it at all. Decide if you...
Is vim multithreaded?
1,465,441,289,000
I have written a bash script which is in following format: #!/bin/bash start=$(date +%s) inFile="input.txt" outFile="output.csv" rm -f $inFile $outFile while read line do -- Block of Commands done < "$inFile" end=$(date +%s) runtime=$((end-start)) echo "Program has finished execution in $runtime seconds." ...
GNU parallel is made for just this sort of thing. You can run your script many times at once, with different data from your input piped in for each one: cat input.txt | parallel --pipe your-script.sh By default it will spawn processes according to the number of processors on your system, but you can customise that wi...
Multi-Threading/Forking in a bash script
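The same fan-out can be approximated with nothing but shell builtins, at the cost of GNU parallel's job throttling and unmixed output. A minimal sketch (the /tmp file names and the process_line stub are illustrative):

```shell
# Run one background subshell per input line, then wait for all of them.
# The redirection on the loop is inherited by the background jobs, so
# their output lands in one file.
process_line() { echo "processed: $1"; }   # stand-in for the real work
printf '%s\n' one two three four > /tmp/mt_input.txt
while IFS= read -r line; do
    process_line "$line" &
done < /tmp/mt_input.txt > /tmp/mt_output.txt
wait
wc -l < /tmp/mt_output.txt
```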
1,465,441,289,000
I have a service which I am calling from another application. Below is my service URL which I am calling - http://www.betaservice.domain.host.com/web/hasChanged?ver=0 I need to do some load test on my above service URL in multithreaded way instead of calling sequentially one by one. Is there any way from bash shell...
I wouldn't call it multithreading as such but you could simply launch 70 jobs in the background: for i in {1..70}; do wget http://www.betaservice.domain.host.com/web/hasChanged?ver=0 2>/dev/null & done That will result in 70 wget processes running at once. You can also do something more sophisticated like this li...
How to call a service URL from bash shell script in parallel?
1,465,441,289,000
My server has been running on Amazon EC2 Linux. I have a mongodb server inside. The mongodb server has been running under heavy load, and, unhappily, I've run into a problem with it :/ As known, mongodb creates a new thread for every client connection, and this worked fine before. I don't know why, but MongoDB ...
Your issue is the max user processes limit. From the getrlimit(2) man page: RLIMIT_NPROC The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN. Same for pthread_creat...
Linux max threads count
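The two limits the answer points at can be read directly: the per-user RLIMIT_NPROC soft limit (which on Linux counts threads) via ulimit, and the kernel-wide ceiling via /proc:

```shell
# RLIMIT_NPROC soft limit ("max user processes"); on Linux this counts
# threads, since threads are schedulable tasks.
ulimit -u
# Absolute system-wide ceiling on the number of threads.
cat /proc/sys/kernel/threads-max
```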
1,465,441,289,000
I am trying to copy files from machineB and machineC into machineA, as I am running my below shell script on machineA. If the files are not on machineB then they should be on machineC for sure, so I will try copying the files from machineB first; if they are not on machineB then I will try copying the same f...
The obvious is: parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}" & parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}" & wait But this way the secondary does not wait for the primary to finish and it does not check if the primary was successful. Let us assume that $PRIMARY_PARTITION[1] corres...
How to launch two threads in bash shell script?
1,465,441,289,000
This is the reasoning for my question: I read this in a text book “Each CPU (or core) can be working on one process at a time.” I'm assuming that this used to be accurate but is no longer fully true. How does multi threading play into this? Or is this still true, can a cpu core on linux still only work on one process...
A single CPU handles one process at a time. But a "process" is a construct of an operating system; the OS calls playing a video in VLC a single process, but it's actually made up of lots of individual instructions. So it's not as if a CPU is tasked with playing a video and has to drop everything it was doing. A CPU ca...
Can a single core of a cpu process more than one process?
1,465,441,289,000
I'm running zgrep on a computer with 16 CPUs, but it only takes one CPU to run the task. Can I speed it up, perhaps utilize all 16 cores? P.S. The I/O is just fine; I could just copy the gzipped file onto a memory disk
You can do as @UlrichDangel suggested in the comments and replace the executable gzip with pigz. If you want something a little less invasive you can also create functions for gzip and gunzip and add them to your $HOME/.bashrc file. gzip() { pigz "$@"; } export -f gzip gunzip() { unpigz "$@"; } export -f gunzip Now ...
Speed up zgrep on a multi-core computer
1,465,441,289,000
TLDR When spinning up multiple docker containers in which I run npm ci, I start getting pthread_create: Resource temporarily unavailable errors (less than 5 docker containers can run fine). I deduce there is some kind of thread limit somewhere, but I cannot find which one is blocking here. configuration a Jenkins ins...
I have found a way to get access to more than 4096 threads. My docker container is a centos7 image; which has by default a user limit set to 4096 processes; as defined in /etc/security/limits.d/20-nproc.conf : # Default limit for number of user's processes to prevent # accidental fork bombs. # See rhbz #432903 for rea...
'pthread_create: Resource temporarily unavailable' when running multiple docker instances
1,465,441,289,000
Here is a shell script which takes a domain and its parameters to find status codes. It runs way faster due to threading but misses a lot of requests. while IFS= read -r url <&3; do while IFS= read -r uri <&4; do urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}...
You are experiencing the problem of appending to a file in parallel. The easy answer is: Don't. Here is how you can do it using GNU Parallel: doit() { url="$1" uri="$2" urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}" --max-time 5 ) && echo "$url $u...
Bash script multithreading in curl commands
1,465,441,289,000
#!/bin/bash while IFS="," read ip port; do ruby test.rb "http://$ip:$port/"& ruby test.rb "https://$ip:$port/"; done <test1.txt How would I do this with multithreading? If I add more lines joined by & it only runs the same command with the same ip and port multiple times; I want it to run with the next ip and port, not the same ...
tr ',' ':' <test1.txt | xargs -P 4 -I XX ruby test.rb "http://XX/" Assuming that the test1.txt file contains lines like 127.0.0.1,80 127.0.0.1,8080 then the tr would change this to 127.0.0.1:80 127.0.0.1:8080 and the xargs would take a line at a time and replace XX in the given command string with the contents of t...
How to bash multithread?
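The tr + xargs pipeline from the answer can be tried end to end with a throwaway input file; here echo stands in for the ruby script, and the /tmp paths are illustrative:

```shell
# Build a small test1.txt, turn "ip,port" into "ip:port", and run up to
# 4 jobs in parallel, one input line per invocation (-I implies -L1).
printf '127.0.0.1,80\n127.0.0.1,8080\n' > /tmp/test1.txt
tr ',' ':' < /tmp/test1.txt |
    xargs -P 4 -I XX echo "http://XX/" > /tmp/urls.txt
sort /tmp/urls.txt
```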
1,465,441,289,000
I'm a student who wants to benchmark a NGS pipeline, monitoring performance according to how many cores it has allocated to it and the size of the input file. For this reason, I wrote a bash script to call it multiple times with different nr_of_cores parameters and input files, noting down completion time and other st...
Assuming that you can afford to tell the system to run it at some later time, and the sysadmin is sensible and has the at package installed, you can use the following to get it to run when load levels are low enough (zero by default, but the sysadmin can set any arbitrary value for the threshold): batch << EOF <command> EOF Othe...
How to delay bash script until there's enough idle cores to run it?
1,465,441,289,000
I am running a command (pngquant to be precise: https://github.com/pornel/pngquant) in a terminal window. I noticed, that if I open 4 terminal windows, and run pngquant command in each of them, I get 4x speed increase, effectively compressing 4 times as many images in the same time as before. So I used this approach a...
Both moreutils parallel and GNU parallel will do this for you. With moreutils' parallel, it looks like: parallel -j "$(nproc)" pngquant [pngquant-options] -- *.png nproc outputs the number of available processors (threads), so that will run available-processors (-j "$(nproc)") pngquants at once, passing each a single...
run command on multiple threads
1,465,441,289,000
None of the command-line shells that I am aware of are multithreaded. In particular, even those shells that support "job control" (Control-Z, bg, fg, etc) do so via facilities (namely, fork, exec, signals, pipes and PTYs) that predate Unix threads. Nor is Emacs multithreaded even though it is able to "do many things ...
$ ps -eLf UID PID PPID LWP C NLWP STIME TTY TIME CMD root 1 0 1 0 1 19:25 ? 00:00:00 init [4] ... root 1699 1 1699 0 1 19:25 ? 00:00:00 /usr/bin/kdm root 1701 1699 1701 8 2 19:25 tty10 00:13:10 /usr/bin/X :1 vt10 ... root 1701 ...
Is any part of the X.org software multithreaded?
1,465,441,289,000
The question refers to the output of a multi-threaded application, where each thread merely prints its ID (user assigned number) on the standard output. Here all threads have equal priority and compete for CPU quota in order to print on the standard output. However, running the same application a sufficiently large nu...
This is 100% normal with respect to threading on any and all operating systems. The documentation for your thread library, any examples and tutorials you may find, etc. are likely to make a point of this as it is often confusing to people when they are learning the ropes of threading. Threads are by default (and by d...
What makes the Linux scheduler seem unpredictable?
1,465,441,289,000
I was wondering how many processes I can create on my machine (x64 with 8Gb of RAM and running Ubuntu). So I made a simple master process which was continuously creating child processes, and those child processes were just sleeping all the time. I ended with just 11-12k processes. Then I switched processes to threads and...
I think you hit either a number of processes limit or a memory limit. When I try your program on my computer and reach the pid == -1 state, fork() returns the error EAGAIN, with error message: Resource temporarily unavailable. As a normal user, I could create approx 15k processes. There are several reasons this EAGA...
What is a limit for number of threads?
1,465,441,289,000
When executing the ps command on my Linux system I see some user processes twice (different PID...). I wonder if they are new processes or threads of the same process. I know some functions in the standard C library that can create a new process, such as fork(). I wonder what concrete functions can make a process appear twice w...
It's a little bit confusing. fork is a system call which creates a new process by copying the parent process's image. After that, if the child process wants to become another program, it calls one of the exec family of system calls, such as execl. If you for example want to run ls in a shell, the shell forks a new child process which then calls...
Which system calls could create a new process?
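The fork-then-exec pattern the answer describes is visible from the shell itself: launching any external command forks a child and execs the new program in it, so the child reports a different pid than its parent. A small sketch:

```shell
# $$ is the pid of this shell. Running sh -c '...' forks a child and execs
# a new shell in it, so the child prints a different pid.
echo "parent pid: $$"
child=$(sh -c 'echo $$')
echo "child pid:  $child"
```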
1,465,441,289,000
I'm currently studying Linux. I know the thread is a kind of lightweight process on Linux. But I wonder where the thread stack space comes from. The stack of the thread is private. It is independent of the process stack. Based on my search, some people said the thread stack was created by mmap(). And also, some pe...
As far as the Linux kernel is concerned, threads are processes with some more sharing than usual (e.g. their address space, their signal handling, and their process id, which is really their thread group id). When a process starts, it has a single thread, with a stack etc. When that thread starts another thread, it’s ...
Does the thread stack come from the memory mapping segment of the process on Linux?
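The distinction shows up in /proc/<pid>/maps: the initial thread's stack is a dedicated "[stack]" mapping, while stacks for additional pthreads are ordinary anonymous mmap() regions with no special label. A single-threaded process therefore shows exactly one [stack] entry:

```shell
# Count "[stack]" mappings. /proc/self resolves inside grep, which is
# single-threaded, so the count is 1; pthread stacks would not add to it.
grep -c '\[stack\]' /proc/self/maps
```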
1,465,441,289,000
I am using Ubuntu 20.04 LTS. The kernel version is 5.4.0-42. Here is an example program: // mre.c // Compile with: cc -o mre mre.c -lSDL2 #include <stdio.h> #include <SDL2/SDL.h> int main(void) { SDL_Init(SDL_INIT_VIDEO); // Doesn't work without SDL_INIT_VIDEO getchar(); } When I look at the running program ...
Those threads are used for the mesa disk cache: util_queue_init(&cache->cache_queue, "disk$", 32, 4, UTIL_QUEUE_INIT_RESIZE_IF_FULL | UTIL_QUEUE_INIT_USE_MINIMUM_PRIORITY | UTIL_QUEUE_INIT_SET_FULL_THREAD_AFFINITY); https://sources.debian.org/src/mesa/22.0.3...
What are these threads named disk$0, disk$1, etc.?
1,465,441,289,000
I don't know where I can find more information about crontab, so I ask here. Is crontab multithreaded? How does it work?
Probably not. All cron has to do is (to put it simply) wait until it is time to run one job or another, and if so, fork a process which runs that job and periodically check whether the job is finished in order to clean it up. MT could be used for this waiting, but I think that would be overkill. With the wait()/...
Is crontab multithread? [closed]
1,465,441,289,000
From the book Advanced Programming in the UNIX Environment I read the following line regarding threads in Unix-like systems: All the threads within a process share the same address space, file descriptors, stacks, and process-related attributes. Because they can access the same memory, the threads need to synchroniz...
In the context of a Unix or linux process, the phrase "the stack" can mean two things. First, "the stack" can mean the last-in, first-out records of the calling sequence of the flow of control. When a process executes, main() gets called first. main() might call printf(). Code generated by the compiler writes the addr...
What is meant by stack in connection to a process?
1,465,441,289,000
I am using the following grep script to output all the unmatched patterns: grep -oFf patterns.txt large_strings.txt | grep -vFf - patterns.txt > unmatched_patterns.txt patterns file contains the following 12-characters long substrings (some instances are shown below): 6b6c665d4f44 8b715a5d5f5f 26364d605243 717c8a919a...
A much more efficient answer that does not use grep: build_k_mers() { k="$1" slot="$2" perl -ne 'for $n (0..(length $_)-'"$k"') { $prefix = substr($_,$n,2); ...
Boosting the grep search using GNU parallel
1,536,393,554,000
I have a bug in my Linux app that is reproducable only on single-core CPUs. To debug it, I want to start the process from the command line so that it is limited to 1 CPU even on my multi-processor machine. Is it possible to change this for a particular process, e.g. to run it so that it does not run (its) multiple thr...
You can use taskset from util-linux. The masks may be specified in hexadecimal (with or without a leading "0x"), or as a CPU list with the --cpu-list option. For example, 0x00000001 is processor #0, 0x00000003 is processors #0 and #1, 0xFFFFFFFF is processors #0 through #3...
Run process as if on a single-core machine to find a bug
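Beyond the hex masks, taskset's --cpu-list form is usually the more readable way to pin a process; the kernel then schedules all of the process's threads on the listed CPUs. A minimal sketch:

```shell
# Run a command restricted to CPU 0 only, as if on a single-core machine.
# taskset ships with util-linux, so it should be present on most Linux
# systems. The grep just confirms the affinity the child actually got.
taskset -c 0 grep Cpus_allowed_list /proc/self/status
```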
1,536,393,554,000
I am working on a cluster machine that uses the Slurm job manager. I just started a multithreaded code and I would like to check the core and thread usage for a given node ID. For example, scoreusage -N 92512 where "scoreusage" is the command that I am unsure of.
It's been a few years since I ran a slurm cluster, but squeue should give you what you want. Try: squeue --nodelist 92512 -o "%A %j %C %J" (that should give your jobid, jobname, cpus, and threads for your jobs on node 92512) BTW, unless you specifically only want details from one particular node, you might be better...
Check CPU/thread usage for a node in the Slurm job manager
1,536,393,554,000
Modern HDDs all are "Advanced Format" ones, e.g. by default they report a logical/physical sector size of 512/4096. By default, most Linux formatting tools use a block size of 4096 bytes (at least that's the default on Debian/EXT4). Until today, I thought that this was kind of optimized : Linux/EXT4 sends chunks of 4K...
Following @Tomes advice, I'm trying to answer my own question, based on my comment exchange with @user10489. Of course I am no expert on this matter, so don't hesitate to amend or correct my statements if needed. But first, a clarification, because on a lot of websites people confuse block size and sector size: A bl...
Are there any benefits in setting a HDD's logical sector size to 4Kn?
1,536,393,554,000
I am trying my hand at the clone() system call to create a thread. However, the program terminates as it returns from the t2_thread() function. Why is this behaviour? What am I missing? #define _GNU_SOURCE #include<sys/syscall.h> #include<stdio.h> #include<unistd.h> #include<stdlib.h> #include<errno.h> #include<sched.h>...
In versions of GNU libc prior to 2.26 and on some architectures including x86_64, upon return from the function passed to clone(), the libc would eventually call exit_group() (with the returned value as argument which you don't pass hence the random 16) which would cause all threads (the whole process) to terminate. I...
Why is this code exiting with return code 16?
1,536,393,554,000
I read that there is a 1:1 mapping of user and kernel threads in Linux. What is the difference between PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM in Linux? If the kernel considers every thread like a process, then there should not be any performance difference? Correct me if I'm wrong.
According to the man page: Linux supports PTHREAD_SCOPE_SYSTEM, but not PTHREAD_SCOPE_PROCESS And if you take a look at the glibc's implementation: /* Catch invalid values. */ switch (scope) { case PTHREAD_SCOPE_SYSTEM: iattr->flags &= ~ATTR_FLAG_SCOPEPROCESS; ...
Pthread scheduler scope variables?
1,536,393,554,000
I am computing Monte-Carlo simulations using GNU Octave 4.0.0 on my 4-core PC. The simulation takes almost 4 hours to compute the script for 50,000 times (specific to my problem), which is a lot of time spent for computation. I was wondering if there is a way to run Octave on multiple cores simultaneously to reduce th...
GNU Parallel will not do multithreading, but it will do multiprocessing, which might be enough for you: seq 50000 | parallel my_MC_sim --iteration {} It will default to 1 process per CPU core and it will make sure the output of two parallel jobs will not be mixed. You can even put this parallelization in the Octave s...
Run GNU Octave script on multiple cores
1,536,393,554,000
Platform information: OpenBSD 6.2 amd64 $ rsync --version rsync version 3.1.2 protocol version 31 I'm trying to sync a large directory (4TB) using the following daily.local file (for Linux admins, this is essentially a cron daily task): #!/bin/sh # Sync the primary storage device to the backup disk /usr/local/bin/rsy...
One way round this problem (if the backup directory is on its own partition) is to leave the volume unmounted, mounting it just before starting the rsync command. This negates the need to use flock and may have the benefit of prolonging drive longevity/reducing power consumption. /etc/fstab: add the noauto option to the...
Stop rsync scheduled task race condition (large directory, small time interval)
1,536,393,554,000
Due to an unpredicted scenario I am currently in need of finding a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context I have an application in Python that uses multiprocessing.Pool to start 5 threads. Each thread writes some data to its...
If you move a file to a different filesystem, what happens under the hood is that the current contents of the file are copied and the original file is deleted. If the program was still writing to the file, it keeps writing to the now-deleted file. A deleted-but-opened file is in fact not deleted, but merely detached (...
Linux - preventing an application from failing due to lack of disk space
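The "deleted but opened" behaviour the answer relies on can be demonstrated in a few lines: after unlinking, the data is still reachable through the /proc/<pid>/fd magic symlink, which is also what makes in-place truncation of a runaway log possible without restarting the writer. The file names here are throwaway mktemp paths:

```shell
# Hold a file open, delete it, then read it back through /proc.
tmp=$(mktemp)
echo "log data" > "$tmp"
exec 3< "$tmp"               # keep the file open on fd 3
rm "$tmp"                    # unlink: deleted, but not gone
data=$(cat /proc/$$/fd/3)    # reopen the deleted file via /proc
echo "$data"
exec 3<&-                    # closing the fd finally frees the space
```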
1,536,393,554,000
What does it mean when threads are time-sliced? Does that mean they work as interrupts, don't exit while routine is not finished? Or it executes one instruction from one thread then one instruction from second thread and so on?
Time-sliced threads are threads executed by a single CPU core without truly executing them at the same time (by switching between threads over and over again). This is the opposite of simultaneous multithreading, when multiple CPU cores execute many threads. Interrupts interrupt thread execution no matter of technolog...
Threads vs interrupts
1,536,393,554,000
I am trying to copy files from machineB and machineC into machineA, as I am running my below shell script on machineA. If the files are not on machineB then they should be on machineC for sure, so I will try copying the files from machineB first; if they are not on machineB then I will try copying the same f...
The error is typically caused by too many ssh/scp starting at the same time. That is a bit odd as you at most run 4. That leads me to believe /etc/ssh/sshd_config:MaxStartups and MaxSessions on $FILERS_LOCATION_1+2 is set too low. Luckily we can ask GNU Parallel to retry if a command fails: do_Copy() { el=$1 PRIMS...
How to copy in two folders simultaneously using GNU parallel by spawning multiple threads?
1,536,393,554,000
A Linux thread or forked process may change its name and/or its commandline as visible by ps or in the /proc filesystem. When using the python-setproctitle package, the same change occurs on /proc/pid/cmdline, /proc/pid/comm, the Name: line of /proc/pid/status and in the second field of /proc/pid/stat, where only cmdl...
All three entries are defined close together in the kernel source: comm, stat, and status. Working forwards from there, comm is handled by comm_show which calls proc_task_name to determine the task’s name. stat is handled by proc_tgid_stat, which is a thin wrapper around do_task_stat, which calls proc_task_name to det...
Thread Name: Is /proc/pid/comm always identical to the Name: line of /proc/pid/status and the second field of /proc/pid/stat?
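Since all three files ultimately go through the task name, their agreement is easy to spot-check for any pid; here the shell inspects itself (stat's second field is the same name, wrapped in parentheses):

```shell
# comm and the Name: line of status should print the same task name.
cat /proc/$$/comm
awk '/^Name:/ { print $2 }' /proc/$$/status
```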
1,536,393,554,000
So I have a 100GB text file and I want to split it into 10000 files. I used to do such tasks with something like: split -l <number of lines> -d --additional-suffix=.txt bigfile small_files_prefix But I tried to do that with this one and I monitored my system and realized that it wasn't using much memory or CPU so I ...
Even with SSDs the bottleneck of splitting files is I/O. Having several processes / threads for that will not gain performance and often be much slower. In addition if you want to split on newlines only then it is not clear in advance from where to where each thread has to copy. You would probably have to write a spec...
How to split a file to multiple files with multiple threads?
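As the answer says, the job is I/O-bound, so a single split process is the right tool. A small runnable version of the command from the question (GNU split options, throwaway paths):

```shell
# Split a 100-line file into 10 files of 10 lines each, numbered
# suffixes (-d) with a .txt extension.
tmp=$(mktemp -d)
seq 1 100 > "$tmp/bigfile"
split -l 10 -d --additional-suffix=.txt "$tmp/bigfile" "$tmp/small_"
ls "$tmp" | grep -c '^small_'
```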
1,536,393,554,000
Is a Linux process considered a thread? For example, if I write a simple c program that calls pthread_create to create a new thread in main(), does that mean that I now have 2 threads, one for main() and the newly created one? Or does only the spawned thread count as a thread but not the main() process? I was wonderi...
From man pthreads on my computer: In addition to the **main (initial) thread**, and the threads that the program creates using pthread_create(3), the implementation creates a "manager" thread. This thread handles thread creation and termination. (Problems can result if this thread is inadvertently killed....
Is each process considered a thread?
1,536,393,554,000
I recently attended an embedded Linux course that stated that uClibc does not support the use of pthreads and that it only supports linuxthreads. Furthermore, the course instructor implied that linuxthreads were next to useless. However, when reading a number of online articles, the implication is that they are in fac...
Starting with version 0.9.32 (released 8 June 2011), uClibc supports NPTL on the following architectures: arm, i386, mips, powerpc, sh, sh64, x86_64. Both NPTL and linuxthreads are implementations of pthreads, and either will provide libpthread.so.
Does uClibc support using pthreads?
1,536,393,554,000
Environment: OS -- Debian + python3. All the output below omits unimportant details. Get my computer's cpu info with cat /proc/cpuinfo : cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model name : Intel(R) Celeron(R) CPU G1840 @ 2.80GHz physical id : 0 siblings : 2 core id : 0 cpu cor...
Sometimes the process id equals the thread id. Here is my code: python3 mthreads.py 7761 cat /proc/7761/status|grep Threads Threads: 2 pstree -p 7761 python3(7761)───{python3}(7762) LWP means light weight process (thread) ID of the dispatchable entity (alias spid, tid), NLWP means number of lwps (threads) in the process, in man...
How to comprehend Cpus_allowed and thread id number?
1,536,393,554,000
I have a test file that looks like this 5002 2014-11-24 12:59:37.112 2014-11-24 12:59:37.112 0.000 UDP ...... 23.234.22.106 48104 101 0 0 8.8.8.8 53 68.0 1.0 1 0.0 0 68 0 48 Each line contains a source ip and destination ip. Here, source ip is 23.234.22.106 and destination ip is 8.8.8.8. I am doing ip lookup for each...
Adjust -jXXX% as needed: PARALLEL=-j200% export PARALLEL arin() { #to get network id from arin.net i="$@" xidel http://whois.arin.net/rest/ip/$i -e "//table/tbody/tr[3]/td[2] " | sed 's/\/[0-9]\{1,2\}/\n/g' } export -f arin iptrac() { # to get other information from ip-tracker.org j="$@" ...
Multi processing / Multi threading in BASH
1,536,393,554,000
I have a program that spawns multiple threads, all of which do fairly intensive IO, running on the background. I want to set the scheduling class to idle so that it doesn't clog up the system; however, ionice -c3 -p <PID>, where <PID> is the process ID, does not have the desired effect. Although the scheduling class f...
ionice can take a process group ID as an argument (-P switch), which, obviously, affects all processes (and threads) in the given process group. One can find the process group ID by looking at the 5th field of /proc/<PID>/stat (or using ps). This setting is a bit more coarse than what I really wanted, but works well ...
Set ionice for a multi-threaded application
1,536,393,554,000
I found all processes on my machine to only run on a single core and their core affinity set to 0. Here is a small python script which reproduces this for me: import multiprocessing import numpy as np def do_a_lot_of_compute(a): for i in range(1000): a = a * np.random.randn(123789) return a if __na...
The problem was related to SLURM and PBS setting the core affinity based on the number of requested cores. In SLURM adding the following line enables the use of all cores: #SBATCH --cpus-per-task=8
All processes running on the same core
1,536,393,554,000
I created a bash function to "automagically" connect to our switches and retrieve their startup-config using the expect command. I have to use expect because this switch does not accept the ssh user@host fashion and asks me again for the User and Password tuple. This is the function that I created to manage those backup...
The solution was the tip that @devnull gave in the comments: execute each function in the background. # Handle comments in the switch list egrep -v '(^#|^\s*$|^\s*\t*#)' $LISTA_SWITCHES | while read IP SWNOME SERVER TIPO do if [ "$TIPO" = core ]; then pc6248 & elif [ "$TIPO" = dep ]; the...
Multiplex bash function execution
1,536,393,554,000
I have a C++ program that is multithreaded. I believe the throughput would increase if I could run it on the other computers connected by a switch. All of them are using the same OS (Ubuntu). Is there a way I can do it without changing the code? If I need to change the code, what should I look for?
This is not generally possible without changing the code. A multi-threaded program will make use of the processors on a single computer. As soon as you want to run the same program across a network of connected machines and have the various instances of the program communicate with each other, the code must do explici...
Run C++ program across computer on network
1,536,393,554,000
I am new to threading, and I wanted to test my newly acquired skills with a simple task: create an image using multiple threads. The interesting part is that, on a single thread, the program runs faster than using 4 threads (which is my most efficient parallel thread running capacity, I believe). I have an i3 ...
The mutex is a red herring -- it is local to the function, and so it's not actually locking anything since there ends up being a separate mutex for each thread. In order to actually lock, you would need to move the mutex variable out of create_image. However, the writes to the image are independent, so it locking isn'...
Why is my program slower, despite using more threads? [closed]
1,536,393,554,000
I am running a benchmark to figure out the number of jobs I should allow GNU Make to use in order to have optimal compile time. To do so, I am compiling Glibc with make -j<N> with N an integer from 1 to 17. I did this 35 times per choice of N so far (35*17=595 times in total). I am also running it with GNU Time to det...
Because your graph showing the global efficiency provides the correct answer to your quest, I'll try to focus on explanations. A/ EFFICIENCY \ JOBS PLACEMENT Theoretically (assuming all CPUs are idle at make launch time, no other task is running, and no job i has already completed when launching the n-th, with n > i), we may expe...
Spike in number of page faults with make -j`nproc`
1,536,393,554,000
I'm trying to compress a large archive with multi-threading enabled, however, my system keeps freezing up and runs out of memory. OS: Manjaro 21.1.0 Pahvo Kernel: x86_64 Linux 5.13.1-3-MANJARO Shell: bash 5.1.9 RAM: 16GB |swapon| NAME TYPE SIZE USED PRIO /swapfile file 32G 0B -2 I've tried th...
From man xz: Memory usage Especially users of older systems may find the possibility of very large memory usage annoying. To prevent uncomfortable surprises, xz has a built-in memory usage limiter, which is disabled by default. The memory usage limiter can be enabled with the command line option --memlimit=limit. ...
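A tiny self-contained demo of the limiter (the temp paths and the 512MiB cap are arbitrary; for a 1TB archive you would pick a limit somewhat below your physical RAM, e.g. --memlimit=12GiB, and let xz scale the thread count down):

```shell
#!/bin/sh
# Use all cores (-T0) but cap memory; xz lowers the number of threads
# (possibly down to 1) rather than exceed the limit.
printf 'hello xz\n' > /tmp/memlimit-demo.txt
xz -T0 --memlimit=512MiB -c /tmp/memlimit-demo.txt > /tmp/memlimit-demo.txt.xz
xz -t /tmp/memlimit-demo.txt.xz && echo "archive OK"
```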
xz: OOM when compressing 1TB .tar
1,536,393,554,000
How many threads should be used to process a million files? How would you justify your answer? This is a question from an OS exam from last year and I'm curious how you guys think. I think that 10,000 threads, each processing 100 files, would be a good ratio.
Usually I/O is the limit. It does not make sense to have so many threads that they are waiting for I/O. You might define the optimum ratio so that n CPU cores are working full time and I/O is at 100%. The optimum number of threads is then defined by the ratio of the time it takes to process a file to the time it takes...
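That ratio can be turned into a back-of-the-envelope calculation (the 90ms wait / 10ms compute split below is an assumed I/O-bound workload, not a measurement):

```shell
#!/bin/sh
# Common heuristic: threads = cores * (1 + wait_time / compute_time)
cores=$(nproc)
wait_ms=90        # assumed time a thread spends blocked on I/O per file
compute_ms=10     # assumed CPU time per file
threads=$(( cores * (1 + wait_ms / compute_ms) ))
echo "suggested threads: $threads"
```

On an 8-core box with this (assumed) split, that suggests 80 threads, not 10,000: past the point where I/O is saturated, extra threads only add scheduling overhead.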
Threads to process a million files [closed]
1,549,906,341,000
lscpu gives: Thread(s) per core: 2 Core(s) per socket: 32 When running an intensive 32-threads process, why does htop show almost 100% CPU activity on #1-32, but very little activity on #33-64? Why aren't the process's 32 threads distributed evenly among CPUs #1-64?
In Linux there is a scheduler. Some systems will push work to faster/cooler/more-efficient cores but the default behavior is an ordered stack. The software you are running needs to take advantage of multiple cores for any benefit to be had, so it may be that your workload can only be split into 32 threads by your choi...
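One way to check whether an affinity mask (rather than the scheduler) is the culprit is taskset, demoed here on the current shell; the widening example uses a hypothetical PID:

```shell
#!/bin/sh
# Show which CPUs the current shell is allowed to run on:
taskset -cp $$
# To widen a pinned process's mask (PID 1234 is hypothetical):
# taskset -cp 0-63 1234
```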
Distribution of threads among CPUs?
1,549,906,341,000
What are the advantages of using threads on single core, does that makes sense to use multithreading on single core?
There is far too little context here to give a good answer, but for most reasonable contexts the answer is "probably yes". The operating system itself runs many things in parallel on that single core, after all, and you'd be pretty darn annoyed if you had to wait for some web page to finish loading before your mouse ...
Two or more threads on single core [closed]
1,549,906,341,000
I am running my shell script on machineA which copies the files from machineB and machineC to machineA. If the file is not there in machineB, then it should be there in machineC for sure. So I will try to copy file from machineB first, if it is not there in machineB then I will go to machineC to copy the same files. ...
Doing several copies in parallel is rarely useful: whether the limiting factor is network bandwidth or disk bandwidth, you'll end up with N parallel streams, each going at 1/N times the speed. On the other hand, when you're copying from or to multiple sources (here B and C), then there is an advantage to doing the cop...
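A local stand-in for that pattern, with two /tmp directories playing the roles of machineB and machineC (in the real script the cp calls would be scp or rsync invocations):

```shell
#!/bin/sh
# Try source B first, fall back to source C; run the two copies in parallel.
mkdir -p /tmp/srcB /tmp/srcC /tmp/dest
echo from-B > /tmp/srcB/file1
echo from-C > /tmp/srcC/file2
for f in file1 file2; do
    ( cp "/tmp/srcB/$f" /tmp/dest/ 2>/dev/null \
        || cp "/tmp/srcC/$f" /tmp/dest/ ) &
done
wait
ls /tmp/dest
```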
How to copy three files at once instead of one file at a time in bash shell scripting?
1,549,906,341,000
I'm trying to install PHP from source code on my Ubuntu 12.04 VPS. I'm installing PHP like this: Download the latest version from the php.net website. Configure it using the parameters below. Install any dependencies when necessary. (libxxxxx-dev) Then do a make Then a make install Move the php.ini file and the fpm c...
There are several alternatives here: Add the --prefix=/usr/local to the configure script (assuming this is what PHP uses) or otherwise ensure that your PHP is installed to /usr/local. This would mean that you would have your own build of PHP installed alongside the system one. Since, for example, /usr/local/bin takes...
Make Ubuntu acknowledge that a custom built version of PHP is installed
1,549,906,341,000
I have downloaded a .jar file and am using java with it, and it seems multithreaded, which is great ... unless I don't want it to be multithreaded, or unless I want to use only N threads with it. Is there a way, in java, to specify how many threads you want to run a .jar file with without having access to the source c...
As it turns out the .jar file I downloaded was single threaded, but Java was using multithreaded garbage collection. To change the number of threads that Java uses for GC, I use java -XX:ParallelGCThreads=2 which fixed the problem.
Limit max thread use for multithreaded java app
1,549,906,341,000
On my machine mkfs.ntfs is slow and results in massive use of resources, preventing me from using the machine for anything else. According to top it (or rather directly related zvol processes) is using 80-90% of every thread available, even threads that were already in use by other processes (such as virtual machines)...
It should not take that long for a single run unless you tell it to zeroise the partition and check for bad sectors (and this is the default at least in my version). It is a good idea to check for bad sectors, but you can skip it with the option -f sudo mkfs.ntfs -f /dev/zd16 -c 8192
massive resource consumption during mkfs.ntfs on a zvol, why (and how can I limit this)?
1,549,906,341,000
The motivation behind this question arises from exploring the Intel Galileo gen2 board which has a single threaded processor. I'm looking for a conceptual explanation on what does that mean for all the userspace applications that rely on the existence of threading? Does this mean that the kernel needs to be patched ...
Multi-tasking systems handle multiple processes and threads regardless of the number of processors or cores installed in the system, and the number of "threads" they handle. Multi-tasking works using time-slicing: the kernel and every running process or thread each get to spend some time running, and then the system s...
Multithreaded applications on a single threaded CPU?
1,549,906,341,000
Because of hyper-threading, my CPU has 2 logical processors per core. If I understand the premise of hyper-threading correctly, it allows each core to have a separate cache and instruction pointer for 2 separate threads simultaneously, but does not allow for simultaneous execution of 2 threads by a single core. As su...
It depends. In general running one software thread per CPU thread will give the best performance. I regularly see speedups of 10% over running one software thread per CPU core - so instead of having one software thread running at 100%, I have two software threads each running at 55%. But I have also seen better perfor...
Does a process filling all logical cores have a negative impact on performance? [closed]
1,549,906,341,000
When I "flood" my CPU with 8 high priority (nice=-20) OS threads (the number of cores I have), operation becomes "halty" for obvious reasons, but is still usable. Note that when I say "high-priority thread" I mean a thread that was spawned by the same high-priority process. However doing something like 64 threads will...
No such relationship exists, at least not directly. Remember that a nice value is a priority. The scheduling ends up the same whether you have N threads of niceness 0, or N threads of niceness 10, or N threads of niceness -10. Whether or not a system remains responsive depends on how much time it has to care about slow th...
Relationship between number of cores and ability to run processes with higher nice values?
1,549,906,341,000
I wrote a simple program with a thread which runs on a CPU core. It spins kind of aggressively, and it takes 100% of the CPU core. I can see that with top + 1. After N minutes, I would like to be able to know: How many times has the kernel preempted (interrupted) my running thread?
That's what Linux has event hooks for, and you can use them with perf Gathering Statistics I'd start with something simple: sudo perf stat -e sched:sched_switch yourprogram Try this: busyloop.c #include <stdint.h> int main() { for (volatile uint_fast64_t i = 0; i < (1ULL<<34); ++i) { } } compile: gcc -O3 ...
How many times has my process been preempted?
1,549,906,341,000
I am reading the following paper. In the paper, the authors argue that Unix/Linux "has struggled for a decade to support multiprocessors in a single node" in the last paragraph of the first page. I don't understand what this sentence actually means. Why is it so hard to support multiprocessors in Unix/Linux? Is it the...
In the “Age” paragraph a few lines above your quote, the paper gives four references: Linux has struggled for a decade to fully leverage multi-cores [14, 20, 22, 34]. Those references are, respectively: Scaling in the Linux Networking Stack (part of the kernel documentation), which describes various techniques to i...
Linux multiprocessors support
1,549,906,341,000
I have a function that has to process all files in a set of directories (anything between 5-300 files). The number of parallel threads to be used is user-specified (usually 4). The idea is to start the function in 4 separate threads. When one thread returns, I have to start processing the next (5th) file and so on til...
It sounds like you want a work queue. You could populate that queue with the collection of files that need to be processed, with a function to dequeue an item from the queue that does the necessary locking to prevent races between threads. Then start however many threads you want. Each thread will dequeue an item f...
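In shell you get exactly that dequeue-and-refill behaviour from xargs -P, which keeps a fixed number of workers busy and starts the next file as soon as one finishes (the demo files and the echo are placeholders for the real inputs and per-file command):

```shell
#!/bin/sh
# Build a small demo queue of files, then process them 4 at a time.
mkdir -p /tmp/queue-demo
for i in 1 2 3 4 5 6; do echo "data $i" > "/tmp/queue-demo/f$i"; done
# -n1: one file per worker invocation; -P4: at most 4 workers at once.
find /tmp/queue-demo -type f -print0 \
    | xargs -0 -n1 -P4 sh -c 'echo "processing $0"'
```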
Schedule jobs from a queue onto multiple threads
1,549,906,341,000
I've been experimenting with lightweight processes. Basically calling the clone function and assigning a new PID to the cloned LWP. This works fine it lets me identify all the children threads of those LWP's. The tiny problem that I ran into is performance. It degrades quite a bit ( processing slows down by 30% ). Now...
After some days of testing I've found out the following. The futexes come from sharing memory buffers between the threads (unfortunately this is unavoidable); the threads run math models at quite high frequencies. The futexes directly impact the execution latency, but not linearly, and it is more dependent on whether t...
Lightweight processes behavior with an new PID
1,549,906,341,000
I'm learning about critical section of multithreading. I have a general statement: In a single CPU system, disable interrupt is a solution of race condition. But I also learn from another site that Threads generally don't interrupt each other. So how can disable interrupt prevents race condition? Can this possibl...
The first statement is true, but it is meaningless on Linux or on any modern system. It is because most drivers or hardware handling is done by interrupts and it is impossible to do it differently. For example, if a packet arrives from the network, the CPU will know it by an interrupt initiated by the network card. But these i...
Do threads interrupt each other in Linux? [closed]
1,549,906,341,000
A LWP is a data structure placed between user thread and kernel thread, and appears as a virtual processor to user thread library. So, the minimum number of LWP required in many to many model of threading is the number of concurrent blocking system calls. Please explain why is it so?
A Lightweight Process is (in Unix and Unix-like) a process that runs in user space over a single kernel thread and shares its address space and resources with other LWPs of the same user process. A system call is an invocation of kernel functionality from user space. When a user process performs a system call, the cal...
Why an LWP (Light Weight Process) is required for each concurrent system call in many to many thread model?
1,549,906,341,000
I recently wrote a "study note" about Unix, and I made following proposition about multi-threaded processes: it will be almost impossible for the kernel to identify the thread that should receive SIGURG, when a TCP packet with "urgent" bit is received in the 3rd paragraph of section 1.1, and I'd like to fact check t...
On the 3 implementations I surveyed, Darwin, FreeBSD, and Linux, the main thread receives the signal. And if the main thread blocks it with a mask, no thread receives the signal.
Which thread receives SIGURG?
1,549,906,341,000
I have the following scenario, I have two programs running one in the background and one in front. The back program is doing some stuff for the front program. once the back program has done the necessary configuration it signals that it has finished the backup support for first program and the now the front program ne...
It is difficult to understand the requirement as specified, so first I will try to show where additional explanation may be needed. You have tagged this post with "Migration", so I assume these programs already exist and are known to work on some non-Linux architecture. The concepts of inter-process communication and ...
How to switch from one process to another process and kill the first process
1,396,455,789,000
When editing an authorised_keys file in Nano, I want to wrap long lines so that I can see the end of the lines (i.e tell whose key it is). Essentially I want it to look like the output of cat authorised_keys So, I hit Esc + L which is the meta key for enabling long line wrapping on my platform and I see the message to...
To see the word wrapping style you described, use nano's "soft wrapping": Esc+$. The Esc+L command you (and everyone) tried does "hard wrapping." Note on keystroke notation - if you are new to Linux, the notation Esc+$ means press and release Esc and then press $. The full key press sequence then is Esc, Shift+4. (I...
Long line wrapping in Nano
1,396,455,789,000
I have some long log files. I can view the last lines with tail -n 50 file.txt, but sometimes I need to edit those last lines. How do I jump straight to the end of a file when viewing it with nano?
Open the file with nano file.txt. Now type Ctrl + _ and then Ctrl + V
Nano - jump to end of file
1,396,455,789,000
A lot of the time I edit a file with nano, try to save and get a permission error because I forgot to run it as root. Is there some quick way I can become root with sudo from within the editor, without having to re-open and re-edit the file?
No, you can't give a running program permissions that it doesn't have when it starts, that would be the security hole known as 'privilege escalation'¹. Two things you can do: Save to a temporary file in /tmp or wherever, close the editor, then dump the contents of temp file into the file you were editing. sudo cp $TM...
Is it possible to save as root from nano after you've forgotten to start nano with sudo?
1,396,455,789,000
While trying to save a file out of Nano the other day, I got an error message saying "XOFF ignored, mumble mumble". I have no idea what that's supposed to mean. Any insights?
You typed the XOFF character Ctrl-S. In a traditional terminal environment, XOFF would cause the terminal to pause its output until you typed the XON character. Nano ignores this because Nano is a full-screen editor, and pausing its output is pretty much a nonsensical concept. As to why the wording is what it is, y...
What does "XOFF ignored, mumble mumble" error mean?
1,396,455,789,000
I have vim as default editor on my Mac and every time I run commands on Mac terminal, it automatically opens "vim". How can I set up "nano" instead and make sure the terminal will open "nano" every time is needed?
Set the EDITOR and VISUAL environment variables to nano. If you use bash, this is easiest done by editing your ~/.bashrc file and adding the two following lines: export EDITOR=nano export VISUAL="$EDITOR" to the bottom of the file. If the file does not exist, you may create it. Note that macOS users should probably ...
How can I set the default editor as nano on my Mac?
1,396,455,789,000
Is there a way to turn on line numbering for nano?
The only thing coming close to what you want is option to display your current cursor position. You activate it by using --constantshow (manpage: Constantly show the cursor position) option or by pressing AltC on an open text file.
Is there line numbering for nano?
1,396,455,789,000
CoreOS does not include a package manager but my preferred text editor is nano, not vi or vim. Is there any way around this? gcc is not available so its not possible to compile from source: core@core-01 ~/nano-2.4.1 $ ./configure checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_6...
To do this on a CoreOS box, following the hints from the guide here: Boot up the CoreOS box and connect as the core user Run the /bin/toolbox command to enter the stock Fedora container. Install any software you need. To install nano in this case, it would be as simple as doing a dnf -y install nano (dnf has replace...
Is there any way to install Nano on CoreOS?
1,396,455,789,000
I can able to select multiple lines using Esc+A. After this, what shortcut(s) should I use to comment/uncomment the selected lines?
Esc3 (or Alt+3) will comment or uncomment the selected lines in recent versions of the nano editor (the version shipped with macOS is too old; install a newer version with e.g. Homebrew). The default comment character used is # (valid in many scripting languages). The comment character may be modified by the comment ...
How to comment multiple lines in nano at once?
1,396,455,789,000
Selecting lines in nano can be achieved using Esc+A. With multiple lines selected, how do I then indent all those lines at once?
Once you have selected the block, you can indent it using Alt + } (not the key, but whatever key combination is necessary to produce a closing curly bracket).
How to indent multiple lines in nano
1,396,455,789,000
Installed Debian Stretch (9.3). Installed Vim and removed Nano. Vim is selected as the default editor. Every time I run crontab -e, I get these warnings: root@franklin:~# crontab -e no crontab for root - using an empty one /usr/bin/sensible-editor: 25: /usr/bin/sensible-editor: /bin/nano: not found /usr/bin/sensible-e...
I found my own answer and so I'm posting it here, in case it helps someone else. In the root user's home directory, /root, there was a file called .selected_editor, which still retained this content: # Generated by /usr/bin/select-editor SELECTED_EDITOR="/bin/nano" The content suggests that the command select-editor i...
How to get rid of "nano not found" warnings, without installing nano?
1,396,455,789,000
When using GNU's Nano Editor, is it possible to delete from the actual cursor position to the end of the text file? My workaround for now: keep CtrlK pressed (the delete-full-line hotkey). But this method is not so comfortable on slow remote connections (telnet, SSH... etc).
According to Nano Keyboard Commands, you can do this with AltT: M-T Cut from the cursor position to the end of the file where the M is "alt" (referring to the ESC key). In the documentation, "cut" is another way of saying delete or remove, e.g., ^K Cut the current line and store it in the cutbuffer
Nano Editor: Delete to the end of the file
1,396,455,789,000
How can I search and replace horizontal tabs in nano? I've been trying to use [\t] in regex mode, but this only matches every occurrence of the character t. I've just been using sed 's/\t//g' file, which works fine, but I would still be interested in a nano solution.
In nano to search and replace: Press Ctrl + \ Enter your search string and hit return Enter your replacement string and hit return Press A to replace all instances To replace tab characters you need to put nano in verbatim mode: Alt+Shift+V. Once in verbatim mode, you can type any character and it'll be accepted l...
Find and replace "tabs" using search and replace in nano
1,396,455,789,000
Is there a way to show or toggle non printing characters like newline or tab in nano? At first let's assume the file is plain ascii.
If it is not configured for "tiny", nano can display printable characters for tab and space, but it has no special provision for newline. This is documented in the manual: set whitespace "string" Set the two characters used to indicate the presence of tabs and spaces. They must be single-column characters. The defaul...
How to show non printing characters in nano
1,396,455,789,000
Which format (Mac or DOS) should I use on Linux PCs/Clusters? I know the difference: DOS format uses "carriage return" (CR or \r) then "line feed" (LF or \n). Mac format uses "carriage return" (CR or \r) Unix uses "line feed" (LF or \n) I also know how to select the option: AltM for Mac format AltD for DOS format ...
Use neither: enter a filename and press Enter, and the file will be saved with the default Unix line-endings (which is what you want on Linux). If nano tells you it’s going to use DOS or Mac format (which happens if it loaded a file in DOS or Mac format), i.e. you see File Name to Write [DOS Format]: or File Name to ...
GNU nano 2: DOS Format or Mac Format on Linux
1,396,455,789,000
Normally I want nano to replace tabs with spaces, so I use set tabstospaces in my .nanorc file. Occasionally I'd like to use nano to make a quick edit to makefiles where I need real tab characters. Is there any way to dynamically toggle tabstospaces? Most of the other options have keys to toggle them, but I can't fi...
The shortcut that toggles tabstospaces is Meta+O (the letter O, not the number 0). (In earlier versions, it was Shift+Alt+Q or Meta+Q.) You will see the prompt changing to: [ Conversion of typed tabs to spaces disabled ] or [ Conversion of typed tabs to spaces enabled ] respectively. Since version 1.3.1, you can als...
Is it possible to easily switch between tabs and spaces in nano?
1,396,455,789,000
Why does ls | nano - open the editor in Ubuntu but close the editor and save a file to -.save in CentOS? How can I get nano in CentOS to remain open when reading stdin?
The feature wasn't added until version 2.2 http://www.nano-editor.org/dist/v2.2/TODO For version 2.2: Allow nano to work like a pager (read from stdin) [DONE] and CentOS6 uses nano-2.0.9-7 (http://mirror.centos.org/centos/6/os/x86_64/Packages/) If you decided you want the latest version, you can download from the ...
Piped input to nano
1,396,455,789,000
I am on Ubuntu 15.10 x64. When I am trying to edit server.js file, it is opening a blank nano editor and displaying "File server.js is being edited (by root with nano 2.4.2, PID xxxx); continue?" with options - Yes, No, Cancel. I copied a backup file on this file but still I am getting the same message. Could you ple...
Check with tools like ps and htop whether this other nano instance is still running. If it's not, there's most likely a hidden dotfile in the same folder which leads nano to believe that the other instance is still running (at least vim works this way, I don't use nano; try ls -lA and look for a file that begins with ...
File server.js is being edited (by root with nano 2.4.2, PID xxxx); continue?
1,396,455,789,000
I enjoy using nano as a respite from my usual GTK-based text editor. I like the simplicity of the interface, and using CTRL-K is the fastest way I know of to edit down long textfiles. However, I have one major gripe: whenever I justify text using CTRL-J, the editor prints the smug little message Can now UnJustify! -- ...
If you press Ctrl+U immediately after Ctrl+J, the justification is undone. Nano in fact tells you (the ^U shortcut description at the bottom changes from UnCut Text to UnJustify). No, I won't blame you for not noticing that. You can't unjustify if you've typed anything after Ctrl+J. Yes, that's pretty underwhelming (f...
How to 'UnJustify!' text in GNU nano
1,396,455,789,000
How to delete the complete word where cursor is positioned in nano text editor? Or if cursor is on white space, I assume it should delete the next word? Nano help shows these two functions but they are not bound to any shortcuts: Cut backward from cursor to word start Cut forward from cursor to next word start Those ...
Save this file to ~/.nanorc and ctrl+] cuts the word to the left, and ctrl+\ cuts right This works for me in nano version 2.5 bind ^] cutwordleft main bind ^\\ cutwordright main This works for me in nano version 2.9.3 bind ^] cutwordleft main bind ^\ ...
How to delete current word in nano text editor?
1,396,455,789,000
I have a docker container running debian linux (it's using the postgres image) that I'd like to install nano on so I can edit files. However, when I try # apt-get install nano the output I get is Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate packag...
So apparently there was an easy solution to this. I just needed to update first: # apt-get update # apt-get install nano
Trouble installing nano
1,396,455,789,000
I like nano and use it a lot. A related application is rnano. From the rnano manual: DESCRIPTION rnano runs the nano editor in restricted mode. This allows editing only the specified file or files, and doesn't allow the user access to the filesystem nor to a command shell. In restric...
I suspect the intended use-case for rnano (or nano -R) is to provide an editor usable in privileged scenarios, or with untrusted keyboard input. For example, if you want to give someone else an editor using your account — they wouldn’t be able to access other files. Likewise, it would be useful to limit the danger of ...
When should "rnano" be used in place of "nano"?
1,396,455,789,000
I've recently started using nano quite a bit for code editing. I've written custom syntax files, and I can include them in my local ~/.nanorc. However, I do work across multiple accounts, so I manually have to apply the include to each user's .nanorc. Is there a system-wide nanorc file I can edit, so the changes take ...
The system-wide nanorc file is at /etc/nanorc You can also add a .nanorc file to /etc/skel so all new users have a local nanorc file added to their home folder.
Is there a global nanorc?
1,396,455,789,000
Obviously there are at least two newline types: DOS and Unix. But does OS X have its own plaintext 'format'? I opened a text file in nano and was surprised to see: [ Read 26793 lines (Converted from Mac format) ] What is Mac format, how is it any different from a file written with a Unix tool like nano, and why does i...
It should be pointed out that Mac OS X uses \n a.k.a linefeed (0x0A) now, just like all other *nix systems. Only Mac OS versions 9 and older used \r (CR). Reference: Wikipedia on newlines.
Does OS X have its own line format?
1,396,455,789,000
Here we have some amazing tools: tmux, ranger, vim... Would be amazing to configure ranger to open the files (when text editable) in a tmux newpane? Is that easy and how it is done?
As of 2022, Python 2 is no longer supported. Here is what works for me on ranger 1.9.3 on macOS via Homebrew. map ef shell [[ -n $TMUX ]] && tmux split-window -h vim %f or map ef eval exec('try: from shlex import quote\nexcept ImportError: from pipes import quote\nif "TMUX" in os.environ: fm.run("tmux splitw -h vim "...
Tmux ranger integration: opening text files in new panes
1,396,455,789,000
I enabled syntax highlight in nano (PHP), but not happy with the default, I would like for example to have the comments displayed in very light grey. However, the documentation I found seems to suggest I can only write colors like "yellow", "red" etc. Is there a way to specify a color by its hex/RGB code? Is there a l...
nano is small. In this case, it limits the choices to the 8 predefined ANSI colors (plus bright/bold) so that it can use the predefined symbols from curses.h (such as COLOR_BLUE) as a guide to naming. Many terminals support 256 predefined colors; nano can't take advantage of them, but Vim can. Terminals which allow d...
Can I/how to specify colors in hex or RGB in nano syntax highlight config?
1,396,455,789,000
I am using nano on files with long lines. How could I scroll the nano window horizontally? Ex: ┌────────────┐ |Lorem ipsum |dolor sit amet, consectetur adipiscing elit, |sed do eiusm|od tempor incididunt ut labore et dolore magna aliqua. |Dolor sed vi|verra ipsum nunc aliquet bibendum enim. |In massa tem|por nec feu...
I'm pretty sure you cannot do that in nano. The closest you could get would be line wrapping, precisely "soft wrapping": Esc+$. This will wrap lines so you could see them all on the screen. Source: https://www.nano-editor.org/dist/v2.9/nano.html (search for --softwrap) You could get this kind of behaviour with vim, th...
Horizontal scrolling in nano editor?
1,396,455,789,000
In my attempt to get unique entries (read lines) out of a simple text file, I accidentally executed nano SomeTextFile | uniq. This "instruction" renders the shell (bash) completely (?) unresponsive/non-usable -- tested within from Yakuake and Konsole. I had to retrieve the process id (PID) (by executing ps aux | grep ...
As other answers have already explained, Ctrl+C doesn't kill Nano because the input of nano is still coming from the terminal, and the terminal is still nano's controlling terminal, so Nano is putting the terminal in raw mode where control characters such as Ctrl+C are transmitted to the program and not intercepted by...
Accidental `nano SomeFile | uniq` renders the shell unresponsive
1,396,455,789,000
I use nano as my standard editor for a file type it has no build in syntax-highlighting for LilyPond. It is nothing I really need, though I'm missing out quite a lot of white-space characters at the end of lines. Sure I could batch remove them as mentioned here in Strip trailing whitespace from files. But it should no...
You can enable this for all filetypes which don't already have syntax highlighting defined by adding the following lines to .nanorc: syntax "default" color ,green "[[:space:]]+$" syntax "default" sets the subsequent definitions for default syntax highlighting (i.e., where a filetype hasn't already been matched by som...
Nano - highlight trailing whitespaces
1,396,455,789,000
How can I highlight a given column using nano? I'm using a fairly large terminal but I would like a mark to know if my code exceeds the limit of let's say 80 characters.
Command-line options: nano -J 80 file nano --guidestripe 80 file Or add this to ~/.nanorc: set guidestripe 80 That information is to be found in the manual under section 3. Notice that feature is absent for versions of nano older than 4.0.
Line length marker in nano
1,396,455,789,000
The problem: I open a terminal (in Linux Mint, so mate-terminal) zsh is the shell Then I run tmux Edit a file with nano Scroll up and down that file with the cursor Issue: When scrolling down in nano, only the bottom half of the terminal window gets refreshed Issue: When scrolling up in nano, only the top half of the...
From the tmux FAQ: ****************************************************************************** * PLEASE NOTE: most display problems are due to incorrect TERM! Before * * reporting problems make SURE that TERM settings are correct inside and * * outside tmux. ...
Fixing scrolling in nano running in tmux in mate-terminal
1,396,455,789,000
Typically when I'm editing a small file over SSH I'll just open up nano. I look at my apache2 access.log a good bit. Since I don't have fail2ban or anything enabled on this box, I typically look at access.log.1 as well. I've noticed in my access.log.(#) a particular line always has an odd highlighting: GET /w00tw00t....
With those syntax-highlighting rules files, nano assumes that filenames ending in .1 - .9 are man pages. It's been quite a while since I edited a man page, but I'm pretty sure that in groff -man, .I is for italic and .B is for bold.
Why does nano sometimes show colors over SSH?
1,396,455,789,000
I use nano as my favorite text editor. I was able to save my documents by pressing F3 + Enter. But is there a way to save the document directly by pressing some key, if I'm sure I would like to save the document to the same name as before?
You can try: Edit your .nanorc file Add line: set tempfile Now, after you finish editing your file, just press Ctrl + X, nano then quits and automatically saves your file.
Is it possible to save text in nano with one keypress