1,659,734,788,000
When an executable file is run in a process, and the executable is overwritten, or deleted and then recreated by reinstallation, will the process run the new executable file? Does the answer depend on:

- whether the executable is run as a service/daemon in the process or not?
- the operating system, e.g. Linux, Unix, ...?
- whether the reinstallation is from an installer file (e.g. a .deb file on Ubuntu, an .msi on Windows) or from building its source code?

Here are some examples. On Ubuntu, when a process runs an executable file and I overwrite the executable by manually reinstalling it via configure, make, and make install on its source code, the process continues to run the original executable file, not the new one. I heard that on Windows 10, when a process runs an executable file as a service and we reinstall the executable via its .msi installer file, the service process restarts and runs the new executable. Is it the same or similar for installation from .deb files on Ubuntu or Debian? Thanks.
It depends on the kernel and on the type of executable. It doesn't depend on how the executable was started or installed.

On Linux, for native executables (i.e. binaries containing machine code, executed directly by the kernel), an executable cannot be modified while it's running:

    $ cp /bin/sleep .
    $ ./sleep 999999 &
    $ echo >sleep
    sh: 1: cannot create sleep: Text file busy

It is possible to remove the executable (i.e. unlink it) and create a new one at the same path. As in any other case where a file is removed while it's still open, removing the executable doesn't affect the running process, and doesn't actually remove the file's data from the disk until the file is no longer in use, i.e. until all running instances of the program exit.

For scripts (beginning with #!), the script file can be modified while the program is running. Whether that affects the program depends on how the interpreter reads the script. If it reads the whole script into its own memory before starting to execute, the execution won't be affected. If the interpreter reads the script on demand, the execution may be affected; some implementations of sh do that.

Many other Unix systems behave this way, but not all. IIRC older versions of Solaris allowed modifying a native executable, which generally caused it to crash. A few Unix variants, including HP-UX, don't even allow removing a native executable that's currently running.

Most software installation programs take care to remove an existing executable before putting a new one in place, rather than overwriting the existing binary, e.g.

    rm /bin/target
    cp target /bin

rather than just cp target /bin. The install shell command does things this way. This is still not ideal, because if someone tries to execute /bin/target while the cp process is running, they'll get a corrupt program. It's better to copy the file to a temporary name and then rename it to the final name: renaming a file (i.e. moving it inside the same directory, or more generally within the same filesystem) atomically replaces the prior target file if one exists. This is how dpkg works, for example.

    cp target /bin/target.tmp
    mv /bin/target.tmp /bin/target
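The copy-then-rename pattern described above can be sketched as a small self-contained script (the directory and file names here are throwaway examples, not real install paths):

```shell
#!/bin/sh
# Sketch of the copy-then-rename installation pattern.
set -e
dir=$(mktemp -d)

printf 'old version\n' > "$dir/target"      # the "currently installed" file

# Stage the new version under a temporary name on the same filesystem,
# then rename. rename(2) replaces the old name atomically: any process
# opening "$dir/target" sees either the old file or the new one, never
# a half-copied mixture.
printf 'new version\n' > "$dir/target.tmp"
mv "$dir/target.tmp" "$dir/target"

cat "$dir/target"                            # prints: new version
rm -rf "$dir"
```

The key point is that both the temporary name and the final name live on the same filesystem, so the final `mv` is a rename rather than a copy.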
Will overwriting an executable file affect a process that is running the original executable?
I've noticed that when I mount a FAT filesystem on Linux, all of the files have their executable permissions set. Why is this? There's almost no chance that you can or want to directly execute any program found on a FAT file system, and having the executable bit implicitly set for all files seems annoying to me. I understand that FAT (and other filesystems as well) have no mode bits, and so the 777 mode I'm seeing on files is just simulated by the filesystem driver under Unix. My question is why 777 instead of 666?
FAT may not be a POSIX-style filesystem, but that doesn't mean you shouldn't be allowed to store executables on it and run them directly from it. Because FAT doesn't store POSIX permissions, the only (easy) way this can happen is if the default mode used for files allows their execution.

In the past, when (V)FAT was still used as the main filesystem for other operating systems (DOS and Windows), and hard drives were smaller, it wasn't unusual to store Unix/Linux binaries on a FAT filesystem. (There's even a FAT variant which stores POSIX attributes in special files, so you could run Linux on a FAT filesystem.) Nowadays you can still end up doing so, on USB keys for example.

If you're worried about the security implications, there are a number of options you can use. noexec and nodev are probably already set for removable filesystems on your distribution; dmask and fmask allow you to specify the modes used for directories and files respectively. showexec only sets the executable bits on files with .bat, .com or .exe extensions. (Note that a file's permissions and the ability to execute it are separate things...)
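As an illustration of the dmask/fmask options, an fstab entry might look like this (the device, mount point and mask values are examples, not recommendations):

```
# /etc/fstab entry for a FAT USB stick. fmask/dmask are *clear* masks:
# bits set in the mask are removed from 0777, so fmask=0133 yields files
# with mode 0644 (no execute bit) and dmask=0022 yields directories 0755.
/dev/sdb1  /media/usb  vfat  fmask=0133,dmask=0022  0  0
```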
Why does Unix set the executable flag for FAT file systems? [closed]
I am getting a not found [No such file or directory] error when trying to execute a ksh script. I read tips about the PATH and about running the script with a ./ in the posts here and here, and tried them, but no luck. The script does exist in the directory from which I am trying to execute it and has full permissions, but it gives the same error whether run directly or with ./. The first line of the script is also #!/usr/bin/ksh. The error message looks like this:

    -ksh: revenue_ext.ksh: not found [No such file or directory]

However, other ksh scripts in the same directory run fine, so I am absolutely clueless about what could be wrong here. Any help would be greatly appreciated.
I believe there may be some carriage returns causing this error. I was able to reproduce it successfully.

Testing:

    $ cat ksh_experiment.ksh
    #!/usr/bin/ksh
    echo "Hello"

After setting the permissions, running the file produced the output successfully. Then, as discussed over here, I inserted some carriage returns into the file. Now when I ran the script, I got:

    ksh: ./ksh_experiment.ksh: not found [No such file or directory]

cat -v ksh_experiment.ksh likewise revealed the problem, and if I typed vim ksh_experiment.ksh, a new file was opened. As discussed in the answer at the link I provided, I removed the carriage returns with:

    perl -p -i -e "s/\r//g" ksh_experiment.ksh

After this fix, running the script produced the expected output.
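The failure and the fix can be reproduced end to end in a throwaway file (the file name is temporary; `tr` is used here, but the `perl -p -i -e 's/\r//g'` one-liner above does the same job):

```shell
#!/bin/sh
# Reproduce the CRLF shebang failure, then fix it.
f=$(mktemp)
# Write the script the way a Windows editor would: lines ending in \r\n.
printf '#!/bin/sh\r\necho Hello\r\n' > "$f"
chmod +x "$f"
# The kernel now looks for an interpreter literally named "/bin/sh\r",
# which does not exist, hence "not found [No such file or directory]".
"$f" 2>&1 || echo "failed as expected"
# Strip the carriage returns and try again.
tr -d '\r' < "$f" > "$f.fixed" && mv "$f.fixed" "$f" && chmod +x "$f"
"$f"      # prints: Hello
rm -f "$f"
```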
-ksh: revenue_ext.ksh: not found [No such file or directory]
I compiled a binary from Go source, but it won't execute. I tried downloading a prebuilt binary, which also didn't work. Permissions all seem to be right, and running the file with go run works for some reason.

Output of ~/go$ go run src/github.com/exercism/cli/exercism/main.go:

    NAME:
       exercism - A command line tool to interact with http://exercism.io
    USAGE:
       main [global options] command [command options] [arguments...]

Output of ~/go/bin$ ./exercism:

    bash: ./exercism: Permission denied

Output of ~/go/bin$ ls -al:

    total 9932
    drwxr-xr-x 2 joshua joshua     4096 Apr 28 12:17 .
    drwxr-xr-x 5 joshua joshua     4096 Apr 28 12:17 ..
    -rwxr-xr-x 1 joshua joshua 10159320 Apr 28 12:17 exercism

Output of ~/go/bin$ strace ./exercism:

    execve("./exercism", ["./exercism"], [/* 42 vars */]) = -1 EACCES (Permission denied)
    write(2, "strace: exec: Permission denied\n", 32strace: exec: Permission denied
    ) = 32
    exit_group(1)                           = ?
    +++ exited with 1 +++
Check that noexec is not in effect on the mount point in question, or choose a better place to launch your script from.

    $ mount | grep noexec
    [ snip ]
    shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
    $ cat > /dev/shm/some_script
    #!/bin/sh
    echo hi
    $ chmod +x /dev/shm/some_script
    $ /dev/shm/some_script
    bash: /dev/shm/some_script: Permission denied
    $ mv /dev/shm/some_script .
    $ ./some_script
    hi

noexec exists specifically to prevent security issues that come from having world-writable places store executable files; you might put a file there, but someone else might rewrite it before you execute it, and then you're not executing the code you thought you were.
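A quick way to check whether a given location is affected, sketched with findmnt from util-linux (the default path argument is just an example):

```shell
#!/bin/sh
# Report whether the filesystem holding a path is mounted with noexec.
target=${1:-.}
opts=$(findmnt -n -o OPTIONS --target "$target")
case ",$opts," in
  *,noexec,*) echo "$target is on a noexec mount: binaries there will not run" ;;
  *)          echo "$target allows execution" ;;
esac
```

findmnt resolves the mount point that actually contains the path, so this works even for bind mounts and nested mounts.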
Despite execution privilege, getting permission denied
Assuming that you can't reach the internet or reboot the machine, how can you recover from chmod -x chmod?
1 - Use a programming language that implements chmod

Ruby:

    ruby -e 'require "fileutils"; FileUtils.chmod 0755, "chmod"'

Python:

    python -c "import os; os.chmod('/bin/chmod', 0755)"

Perl:

    perl -e 'chmod 0755, "chmod"'

Node.js:

    node -e 'require("fs").chmodSync("/bin/chmod", 0o755)'

C:

    $ cat - > restore_chmod.c
    #include <sys/types.h>
    #include <sys/stat.h>
    int main () { chmod( "/bin/chmod", 0000755 ); }
    ^D
    $ cc restore_chmod.c
    $ ./a.out

2 - Create another executable with chmod's contents

By creating an executable:

    $ cat - > chmod.c
    int main () { }
    ^D
    $ cc chmod.c
    $ cat /bin/chmod > a.out

By copying an executable:

    $ cp cat new_chmod
    $ cat chmod > new_chmod

3 - Launch BusyBox (it has chmod built in)

4 - Use GNU tar

Create an archive with specific permissions and use it to restore chmod:

    $ tar --mode 0755 -cf chmod.tar /bin/chmod
    $ tar xvf chmod.tar

Do the same thing on the fly, without even bothering to create the file:

    tar --mode 755 -cvf - chmod | tar xvf -

Open a socket to another machine, create an archive there and restore it locally:

    $ tar --preserve-permissions -cf chmod.tar chmod
    $ tar xvf chmod.tar

Another possibility would be to create the archive normally and then edit it to alter the permissions.

5 - cpio

cpio lets you manipulate archives; in a cpio archive, after the first 21 bytes of a header there are three bytes that hold the file permissions. If you edit those, you're good to go:

    echo chmod | cpio -o | perl -pe 's/^(.{21}).../${1}755/' | cpio -i -u

6 - Dynamic loaders

    /bin/ld.so chmod +x chmod

(actual paths may vary)

7 - /proc wizardry (untested)

Step by step:

- Do something that forces the inode into cache (attrib, ls -@, etc.)
- Check kcore for the VFS structures
- Use sed or something similar to alter the execute bit without the kernel realising it
- Run chmod +x chmod once

8 - Time travel (git; also untested)

First, let's make sure we don't get everything else in the way as well:

    $ mkdir sandbox
    $ mv chmod sandbox/
    $ cd sandbox

Now let's create a repository and tag it to something we can go back to:

    $ git init
    $ git add chmod
    $ git commit -m '1985'

And now for the time travel:

    $ rm chmod
    $ git-update-index --chmod=+x chmod
    $ git checkout '1985'

There should be a bunch of git-based solutions, but I should warn you that you may hit a git script that actually tries to use the system's chmod.

9 - Fighting fire with fire

It would be great if we could fight an operating system with another operating system: launch an operating system inside the machine and have it access the outer file system. Unfortunately, pretty much every operating system you launch is going to live in some kind of Docker container, jail, etc. So, sadly, that is not possible. Or is it? Here is the Emacs solution:

    Ctrl+x b
    > *scratch*
    (set-file-modes "/bin/chmod" (string-to-number "0755" 8))
    Ctrl+j

10 - Vim

The only problem with the Emacs solution is that I'm actually a Vim kind of guy. When I first delved into this topic Vim didn't have a way to do this, but in recent years someone made amends with the universe, which means we can now do this:

    vim -c "call setfperm('chmod', 'rwxrwxrwx') | quit"
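Option 1 is easy to verify safely: the same chmod(2) call works on any file, so here it is exercised on a throwaway temp file instead of /bin/chmod (the file path is generated, nothing system-wide is touched):

```python
import os
import stat
import tempfile

def restore_exec_bit(path):
    # The recovery step: any language that wraps chmod(2) will do.
    os.chmod(path, 0o755)

# Stand-in for /bin/chmod: a temp file that has "lost" its execute bits.
fd, victim = tempfile.mkstemp()
os.close(fd)
os.chmod(victim, 0o644)
before = bool(os.stat(victim).st_mode & stat.S_IXUSR)

restore_exec_bit(victim)
after = bool(os.stat(victim).st_mode & stat.S_IXUSR)

print(before, after)   # False True
os.remove(victim)
```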
How can I recover from a `chmod -x chmod`? [duplicate]
I followed Michael's reply to see what executable formats my Ubuntu can recognize and execute:

    $ ls -l /proc/sys/fs/binfmt_misc/
    total 0
    -rw-r--r-- 1 root root 0 Apr 19 16:11 cli
    -rw-r--r-- 1 root root 0 Apr 19 16:11 jar
    -rw-r--r-- 1 root root 0 Apr 19 16:11 python2.7
    -rw-r--r-- 1 root root 0 Apr 19 16:11 python3.5
    --w------- 1 root root 0 Apr 19 16:11 register
    -rw-r--r-- 1 root root 0 Apr 19 16:11 status

I have never intentionally changed anything there; the files were created by default or when I installed some other programs.

    $ cat /proc/sys/fs/binfmt_misc/cli
    enabled
    interpreter /usr/lib/binfmt-support/run-detectors
    flags:
    offset 0
    magic 4d5a

What kind of executable format is this? I googled "magic 4d5a" and found https://en.wikipedia.org/wiki/DOS_MZ_executable, but I am not sure how the file got there, since it is not a native executable format on Linux. Did the installation of Wine add it?

    $ cat /proc/sys/fs/binfmt_misc/jar
    enabled
    interpreter /usr/lib/jvm/java-9-oracle/lib/jexec
    flags:
    offset 0
    magic 504b0304

Is the above for the JVM bytecode format?

    $ cat /proc/sys/fs/binfmt_misc/python3.5
    enabled
    interpreter /usr/bin/python3.5
    flags:
    offset 0
    magic 160d0d0a

Is the above for Python bytecode or Python source?

    $ cat /proc/sys/fs/binfmt_misc/status
    enabled
    $ cat /proc/sys/fs/binfmt_misc/register
    cat: /proc/sys/fs/binfmt_misc/register: Permission denied

What is /proc/sys/fs/binfmt_misc/register used for? Does it also enable some executable format? Does the ELF format need a file under /proc/sys/fs/binfmt_misc/? Thanks.
See How is Mono magical? for more background. /proc/sys/fs/binfmt_misc is a virtual file system managed by binfmt_misc (which is why the files are all 0-sized).

- cli is used for Windows and .NET executables (and really any MZ executable, as also used in DOS and OS/2); the detector it refers to determines whether a given binary should be run using Wine or Mono.
- jar provides support for JAR files, as used by Java programs. You can thus make a JAR executable and run it directly (instead of using java -jar ...).
- The python files provide support for Python bytecode.
- status shows the overall status of binfmt_misc: in this case, it’s enabled.
- register allows new formats to be registered. This is done by echoing a string in a specific format (see the documentation for details) to register. The registered format then shows up as a new file alongside cli, jar and the others.

Many kinds of executable formats can be registered using binfmt_misc. They can be matched using a file extension (.jar etc., although JAR files are identified by their “PK” signature instead) or a magic value (“MZ” etc.), as long as the magic value occurs within the first 128 bytes. Beyond the files you’ve listed, other formats typically handled this way are binaries for other architectures (“interpreted” by QEMU, or by emulators such as Hatari), and some interpreted game formats (the love game engine registers itself in this fashion, under Debian at least). Under Debian and derivatives, packages register binary formats using binfmt-support and files in /usr/share/binfmts/ (e.g. /usr/share/binfmts/cli); dlocate -S /usr/share/binfmts/* will tell you which packages are adding binary formats.

ELF doesn’t need any registration; it’s supported natively by the kernel.
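For illustration, writing to the register file looks like this (requires root and a mounted binfmt_misc; the format name, magic bytes and interpreter path below are made up for the example):

```
# Registration string format:
#   :name:type:offset:magic:mask:interpreter:flags
# Register files starting with the bytes "FOO" to be run by a hypothetical
# /usr/local/bin/foo-run. A file /proc/sys/fs/binfmt_misc/foo then appears.
echo ':foo:M::FOO::/usr/local/bin/foo-run:' > /proc/sys/fs/binfmt_misc/register
```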
What kinds of executable formats do the files under /proc/sys/fs/binfmt_misc/ allow?
I'm trying to figure out how the sticky bit is used in NFSv3. RFC 1813 says on page 22:

    0x00200    Save swapped text (not defined in POSIX).

What do they mean by "swapped text"? In "NFS Illustrated", the author, Brent Callaghan, says it means not to cache. However, I haven't seen this explanation anywhere else.
The text section of an executable is the actual executable code; that is what "swapped text" refers to. On Linux this request is ignored: it is just an optimisation hint set by the admin, and the kernel can make the same decision by itself without the prompt. The bit says that if the executable's text gets swapped out and the process ends, the swapped copy should be kept for next time. On Linux, (local) executables are not swapped out, as it is just as quick to reload them from the original file; maybe it is a bit different over NFS.

The sticky bit has other meanings for other file types. You described the meaning for executables; for directories, it stops non-owners from deleting files. I assume that NFS behaves the same; it did when I used it 20 years ago.

From http://netbsd.gw.com/cgi-bin/man-cgi?sticky+7+NetBSD-current :

Later, on SunOS 4, the sticky bit got an additional meaning for files that had the bit set and were not executable: read and write operations from and to those files would go directly to the disk and bypass the buffer cache. This was typically used on swap files for NFS clients on an NFS server, so that swap I/O generated by the clients on the server would not evict useful data from the server's buffer cache.
What does the "sticky bit" mean in NFS?
If one program, for example grep, is currently running, and a user executes another instance, do the two instances share the read-only .text sections between them to save memory? Would the sharing of the main executable's text be done similarly to shared libraries? Does Linux exhibit this behavior, and if so, do other Unices as well? If this is not done in Linux, would any benefit come from implementing executables that often run multiple instances in parallel as shared libraries, with the invoked executable simply calling a main function in the library?
Unix shares executables, and shared libraries are called shared (duh...) because their in-memory images are shared between all users. I.e., if I run two instances of bash(1), and in one of them run, say, vim(1), I'll have one copy each of the bash and vim executables in memory, and (as both programs use the C library) one copy of libc. Even better: Linux demand-pages from the on-disk copies of those executables and libraries, so what stays in memory is just the pages that have been used recently. Code for rarely used vim commands, bash error handling, unused functions in libc, and so on just uses up disk space, not memory.
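You can observe the sharing on Linux through /proc: two processes started from the same binary map the same file (same device:inode pair in their maps), so the kernel keeps a single copy of those read-only text pages in the page cache. The binary path below is a throwaway copy of sleep:

```shell
#!/bin/sh
# Start two processes from one binary and show that both map the same file.
set -e
cp "$(command -v sleep)" /tmp/demo_sleep
/tmp/demo_sleep 5 & p1=$!
/tmp/demo_sleep 5 & p2=$!
sleep 1      # give both a moment to exec

# Each line shows address range, permissions, offset, device:inode, path.
grep -m1 demo_sleep "/proc/$p1/maps"
grep -m1 demo_sleep "/proc/$p2/maps"

kill "$p1" "$p2"
rm -f /tmp/demo_sleep
```

The r-xp mapping of /tmp/demo_sleep in both outputs carries the same device:inode, which is what makes page sharing possible.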
Are .text sections shared between loaded ELF executables?
I imagine there's an environment variable or some setting I'm unaware of, but this is driving me nuts.

    baco:~ # ls -la /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin
    -rwxr-x--- 1 root root 57263 Mar 10  2008 /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin
    baco:~ # ls -la /usr/local/subversion-1.6.1/subversion/svnadmin/.libs/svnadmin
    -rwxr-xr-x 1 root root 76125 Apr 20  2009 /usr/local/subversion-1.6.1/subversion/svnadmin/.libs/svnadmin

I have two versions of svnadmin compiled there. If I execute one, I get it:

    baco:~ # /usr/local/subversion-1.6.1/subversion/svnadmin/.libs/svnadmin --version
    svnadmin, version 1.6.1 (r37116)
       compiled Apr 20 2009, 16:09:36
    Copyright (C) 2000-2009 CollabNet.
    Subversion is open source software, see http://subversion.tigris.org/
    This product includes software developed by CollabNet (http://www.Collab.Net/).
    The following repository back-end (FS) modules are available:
    * fs_base : Module for working with a Berkeley DB repository.
    * fs_fs : Module for working with a plain file (FSFS) repository.

If I execute the other, with its full path, I still get the former!

    baco:~ # /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin --version
    svnadmin, version 1.6.1 (r37116)
       compiled Apr 20 2009, 16:09:36
    Copyright (C) 2000-2009 CollabNet.
    Subversion is open source software, see http://subversion.tigris.org/
    This product includes software developed by CollabNet (http://www.Collab.Net/).
    The following repository back-end (FS) modules are available:
    * fs_base : Module for working with a Berkeley DB repository.
    * fs_fs : Module for working with a plain file (FSFS) repository.

If I run svnadmin without path information, I also get the 1.6.1 version (normal, due to $PATH). Via cron I can get the 1.4.6 executed, so this has to be something particular to interactive or login shells.

EDIT: I know that cron is executing the 1.4.6 because I've run /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin --version via cron and I get output from a 1.4.6 version (with the proper compilation date). If I run the 1.6.1 version with its full path via cron, I do get 1.6.1's output.

Both are binary files:

    baco:~ # file /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin
    /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped
    baco:~ # file /usr/local/subversion-1.6.1/subversion/svnadmin/.libs/svnadmin
    /usr/local/subversion-1.6.1/subversion/svnadmin/.libs/svnadmin: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped

They are not hard links either:

    baco:~ # stat -c %h /usr/local/subversion-1.6.1/subversion/svnadmin/.libs/svnadmin
    1
    baco:~ # stat -c %h /root/subversion-1.4.6/subversion/svnadmin/.libs/svnadmin
    1
It looks like the svnadmin binary is just a thin layer of code that wraps a shared library which does the actual work (including reporting the version number). Indeed, if I run strings $(which svnadmin), the version message does not appear in the output, so it's not part of the svnadmin binary itself. So a difference in LD_LIBRARY_PATH between your interactive session and cron could explain the difference in behavior.
Bash executes a different file from the one prompted, even when providing full path
Can we determine, inside the script itself, whether it was started as source (.) or as an executable (via a shebang or something similar)?
Test $0. If you have a script:

    #!/bin/bash
    echo $0

make it executable (chmod 755 test.sh) and do:

    source test.sh

you get bash (or something else, depending on how you are logged in and what your shell is). If you do ./test.sh, you get ./test.sh. So, assuming that the script knows under what name it is saved on disk, you can do:

    if [ "$(basename "$0")" = "test.sh" ]
    then
        # ..... your code here for non-sourced
    else
        # ..... your code here for sourced
    fi
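A bash-specific alternative that doesn't require the script to know its own file name is to compare $0 with BASH_SOURCE (a sketch; this only works in bash, not plain sh):

```shell
#!/bin/bash
# BASH_SOURCE[0] always names this file; $0 only matches it when the
# file is executed directly rather than sourced.
if [ "${BASH_SOURCE[0]}" != "$0" ]; then
    echo "sourced"
else
    echo "executed directly"
fi
```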
Script started as `source` or `executable`?
I am attempting to assemble the assembly source file below using the following NASM command:

    nasm -f elf -o test.o test.asm

This completes without errors, and I then try to link an executable with ld:

    ld -m elf_i386 -e main -o test test.o -lc

This also appears to succeed, and I then try to run the executable:

    $ ./test
    bash: ./test: No such file or directory

Unfortunately, it doesn't seem to work. I tried running ldd on the executable:

    linux-gate.so.1 =>  (0xf777f000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7598000)
    /usr/lib/libc.so.1 => /lib/ld-linux.so.2 (0xf7780000)

I installed the lsb-core package and verified that /lib/ld-linux.so.2 exists. How come I still can't run the executable? I'm attempting to do this on a machine running the 64-bit edition of Ubuntu 15.04.

The source code:

    ; This code has been generated by the 7Basic
    ; compiler <http://launchpad.net/7basic>

    extern printf
    extern scanf
    extern read
    extern strlen
    extern strcat
    extern strcpy
    extern strcmp
    extern malloc
    extern free

    ; Initialized data
    SECTION .data
    s_0 db "Hello, World!",0
    printf_i: db "%d",10,0
    printf_s: db "%s",10,0
    printf_f: db "%f",10,0
    scanf_i: db "%d",0
    scanf_f: db "%lf",0

    ; Uninitialized data
    SECTION .bss
    v_12 resb 4
    v_0 resb 4
    v_4 resb 8

    SECTION .text
    ; Code
    global main
    main:
    finit
    push ebp
    mov ebp,esp
    push 0
    pop eax
    mov [v_12], eax
    l_0:
    mov eax, [v_12]
    push eax
    push 5
    pop edx
    pop eax
    cmp eax, edx
    jl l_2
    push 0
    jmp l_3
    l_2:
    push 1
    l_3:
    pop eax
    cmp eax, 0
    je l_1
    push s_0
    push printf_s
    call printf
    add esp, 8
    mov eax, [v_12]
    push eax
    push 1
    pop edx
    pop eax
    add eax, edx
    push eax
    pop eax
    mov [v_12], eax
    jmp l_0
    l_1:
    mov esp,ebp
    pop ebp
    mov eax,0
    ret

Here's the output of strings test:

    /usr/lib/libc.so.1
    libc.so.6
    strcpy
    printf
    strlen
    read
    malloc
    strcat
    scanf
    strcmp
    free
    GLIBC_2.0
    t'hx
    Hello, World!
    .symtab
    .strtab
    .shstrtab
    .interp
    .hash
    .dynsym
    .dynstr
    .gnu.version
    .gnu.version_r
    .rel.plt
    .text
    .eh_frame
    .dynamic
    .got.plt
    .data
    .bss
    test.7b.out
    printf_i
    printf_s
    printf_f
    scanf_i
    scanf_f
    v_12
    _DYNAMIC
    _GLOBAL_OFFSET_TABLE_
    strcmp@@GLIBC_2.0
    read@@GLIBC_2.0
    printf@@GLIBC_2.0
    free@@GLIBC_2.0
    _edata
    strcat@@GLIBC_2.0
    strcpy@@GLIBC_2.0
    malloc@@GLIBC_2.0
    scanf@@GLIBC_2.0
    strlen@@GLIBC_2.0
    _end
    __bss_start
    main
You also need to link startup fragments such as crt1.o if you want to call libc functions. The linking process can be quite complicated, so you're better off using gcc for it. On amd64 Ubuntu, you can:

    sudo apt-get install gcc-multilib
    gcc -m32 -o test test.o

You can see the files and commands gcc uses for the link by adding the -v option.
Unable to run an executable built with NASM
For convenience, I store all my data on my Windows partition so that I can access it easily from both Linux and Windows. However, I tried compiling a C++ program with g++ and found that I cannot run the program with ./program_filename, as it tells me:

    bash: program_filename: Permission denied

Doing cp program_filename ~/program_filename and running it from my home directory works just fine, however. So I tried chmod +rwx program_filename, but ls -l shows that the permissions are still set to -rw------- for all files in the directory. Nothing changes when I do this as root, either. Is there a simple fix for this? (In case it's useful, I am running Fedora 16 x64.)
Make sure that your mount options allow the execute permission bit. There are mount options that limit the permissions of files within the mounted filesystem: the generic noexec option prevents all files from being executed, and the FAT-specific option showexec grants the permission only to files with the extensions .exe, .com and .bat. Note also that noexec is implied by user and users. If you use user or users, you can still get the execute permission bit working by mounting with an explicitly specified exec option placed after the user or users option. See the mount manpage for details.
Why can't I run programs on another partition in Linux?
I am trying to get npm to work. In the process, I seem to have ended up with two versions of it installed: a corrupt one in ~/bin, and another I just compiled and installed with make install into /usr/local/bin/npm. So, I moved the entire ~/bin folder to ~/old/bin... but when I run npm, the system still looks in ~/bin:

    $ which npm
    /usr/local/bin/npm
    $ alias npm
    -bash: alias: npm: not found
    $ npm
    -bash: /home/ubuntu/bin/npm: No such file or directory
    $ echo $PATH
    /home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
    $

What causes Ubuntu to look for npm in ~/bin/npm? I'm using Ubuntu 11.10. I don't know whether the question is Ubuntu-specific; it can be moved to askubuntu if needed. I do see ~/bin in the PATH, but as far as I understand, that only means that if npm were present in ~/bin it would be used... but why does bash insist on looking for it specifically there? Why doesn't it find /usr/local/bin/npm, even though the which command does find it?
The executable's previously known location has most likely been hashed by the shell. Resetting the shell's cache with hash -r should fix the issue. If you don't want to reset the entire cache, you can delete the individual entry for npm with hash -d npm.
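The behaviour is easy to reproduce with a throwaway command name and two temporary directories (all paths below are generated on the fly):

```shell
#!/bin/bash
# Demonstrate bash's command-location hashing and the hash builtin.
set -e
d1=$(mktemp -d); d2=$(mktemp -d)
printf '#!/bin/sh\necho from-d1\n' > "$d1/demo_cmd"
printf '#!/bin/sh\necho from-d2\n' > "$d2/demo_cmd"
chmod +x "$d1/demo_cmd" "$d2/demo_cmd"
PATH="$d1:$d2:$PATH"

demo_cmd            # PATH search finds the d1 copy; bash caches its location
rm "$d1/demo_cmd"   # the cached location is now stale
hash -d demo_cmd    # forget just this entry (hash -r would clear the table)
demo_cmd            # a fresh PATH search now finds the d2 copy

rm -rf "$d1" "$d2"
```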
What determines the path where the system searches for a file?
I'm developing a Perl script which is expected to be downloaded by Mac users with very little knowledge of shell, Linux, etc.; let's say office managers and accountants. After downloading, the script should be executed just by double-clicking in the GUI. My goal is to make this as painless as possible for a non-tech-savvy user. My doubts are:

- after downloading, the script won't have the executable bit set;
- if the perl executable is not at the default location, then I should write something other than #!/usr/bin/perl. What should I write there?

Is there any other way besides opening a console and typing perl ./script.pl?
Or, you can have sh take care of it for you:

    #!/bin/sh
    exec perl -x "$0" "$@"
    #!/usr/bin/perl
    ...

Yes, that's sh and Perl all in one file. From man perlrun:

    -x  tells Perl that the program is embedded in a larger chunk of
        unrelated text, such as in a mail message. Leading garbage will be
        discarded until the first line that starts with "#!" and contains
        the string "perl". Any meaningful switches on that line will be
        applied.

This approach only assumes the path of sh (which should be the same on any POSIX-compliant OS) and that a non-interactive instance of sh has perl somewhere in its PATH. As for ensuring the script has the executable bit set, you can always distribute it as a tarball and have your users "right click, extract here" from the GUI. If the tarball contains the script with the executable bit set, the extracted script will have it too.
run perl script with unknown perl location
I have this at the top of a script:

    #!/usr/bin/env mocha

As most people know, this tells the OS which executable to use to run the script. However, my question is: how can we pass more information to the mocha executable about how to execute the script? mocha takes optional arguments, so I would like to do something like this:

    #!/usr/bin/env mocha --reporter=tap --output=foo

but I don't think this is allowed. How can I give the mocha executable more information about how to run the file?
The shebang line is interpreted by the kernel and is not very flexible. On Linux, it's limited to a single argument: the syntax is #!, optional whitespace, the path to the interpreter (not containing whitespace), optional whitespace, and optionally a single argument (which may contain whitespace except at the beginning). Furthermore, the total size of the shebang line is limited to 128 bytes (the BINPRM_BUF_SIZE constant in the kernel sources, used in load_script). If you want to pass more than one argument, you need a workaround. If you're using #!/usr/bin/env for path expansion, then there's only room for the command name and no other argument.

The most obvious workaround is a wrapper script. Instead of having /path/to/my-script contain the mocha code, you put the mocha code in some other file /path/to/my-script.real and make /path/to/my-script a small shell script. Here's a sample wrapper that assumes the real code is in a file with the same name as the script, plus .real at the end:

    #!/bin/sh
    exec mocha --reporter=tap --output=foo "$0.real" "$@"

With a shell wrapper, you can take the opportunity to do more complex things such as defining environment variables, looking for available interpreter versions, etc. Using exec before the interpreter ensures that the mocha script runs in the same process as the shell wrapper. Without exec, depending on the shell, it might run as a subprocess, which matters e.g. if you want to send signals to the script.

Sometimes the wrapper script and the actual code can be in the same file, if you manage to write a polyglot: a file that is valid code in two different languages. Writing polyglots is not always easy (or even possible), but it has the advantage of not having to manage and deploy two separate files. Here's a JavaScript/shell polyglot where the shell part executes the JS interpreter on the file (assuming that the JS interpreter ignores the shebang line; there isn't much you can do if it doesn't):

    #!/bin/sh
    ///bin/true; exec mocha --reporter=tap --output=foo "$0" "$@"
    … (the rest is the JS code) …
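The single-argument packing is easy to observe: on Linux, everything after the interpreter path arrives as one argv entry, followed by the script's path. Here the "interpreter" is just a small sh script that prints its argv (all paths are temporary):

```shell
#!/bin/sh
# Show that Linux passes everything after the shebang interpreter path
# as a single argument.
set -e
d=$(mktemp -d)
cat > "$d/interp" <<'EOF'
#!/bin/sh
printf 'argv:'
for a in "$@"; do printf ' [%s]' "$a"; done
echo
EOF
chmod +x "$d/interp"

printf '#!%s --one --two\n' "$d/interp" > "$d/script"
chmod +x "$d/script"
"$d/script"     # "--one --two" arrives as one bracketed argument
rm -rf "$d"
```

The output shows `[--one --two]` as a single argv entry, followed by the script path, which is exactly why `#!/usr/bin/env mocha --reporter=tap` fails: env is handed the single string "mocha --reporter=tap" as a command name.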
Include more instructions on how to run file in hashbang or elsewhere
I deleted the executable, so why is it still running?

    root@raspberrypi:/test# ls -la
    total 11096
    drwxrwxrwx  2 pi pi     4096 Mar 12 18:26 .
    drwxrwxr-x 11 pi pi     4096 Feb 28 13:50 ..
    -rwxrw-rw-  1 pi pi    12149 Feb 28 13:00 .cproject
    -rwxrw-rw-  1 pi pi     3183 Mar 12 18:26 main.cpp
    -rwxrw-rw-  1 pi pi     2169 Feb 28 14:28 main.cpp~
    -rwxrw-rw-  1 pi pi     1862 Feb 28 13:20 original.cpp
    -rwxrw-rw-  1 pi pi      984 Feb 28 13:09 .project
    -rwxrw-rw-  1 pi pi 11323309 Jan 28 12:54 teatro.png
    root@raspberrypi:/test# ./testedfb
    running...

Update: Some time later (after I tried to compile again, but there was an error and the executable was not created), the behavior changed to:

    root@raspberrypi:/test# ./testedfb
    bash: ./testedfb: No such file or directory

But I'm still curious: why was it running?

Update 2: It just happened again, and I did a test:

    root@raspberrypi:/test# killall -9 testedfb
    testedfb: no process found
    root@raspberrypi:/test# ./testedfb
    running...

And it still runs... I'm in Bash on a Raspberry Pi.

Update 3: Now the opposite just happened; the file was there, but it was not found:

    root@raspberrypi:/test# ./testedfb
    bash: ./testedfb: No such file or directory
    root@raspberrypi:/test# ls
    main.cpp  main.cpp~  original.cpp  teatro.png  testedfb

(and on the next attempt it did run). Maybe it's NFS (this folder is mounted over it) being too slow to update? (Thanks @derobert for asking about the filesystem.) That would answer this second case, but not the first one, where I cannot see the executable but can still execute it (and the process doesn't appear to be running either).
Looks like it's a problem synchronizing the NFS folder. If I create the executable on the NFS server, it will only be visible/executable locally after an ls command. If I delete the executable on the NFS server, it still runs locally, even after an ls command showing that the file is not there. But if I delete the executable locally (in the same NFS folder) it will not be found to execute again. (To be clear: by 'locally' I mean on the client Raspberry terminal, not on the NFS server machine.)
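The first behavior (a deleted binary that keeps running) is not NFS-specific at all; it's ordinary Unix unlink semantics, and it can be reproduced on a local filesystem. A minimal sketch (file names are made up):

```shell
# Reproduce "deleted but still running" on a local filesystem.
# rm only unlinks the name; the inode lives on while a process holds it open.
cp /bin/sleep ./mysleep
./mysleep 30 &            # this process keeps the executable's inode busy
pid=$!
rm ./mysleep              # the directory entry is gone...
kill -0 "$pid" && echo "still running"   # ...but the process is unaffected
kill "$pid"               # clean up
```

The second behavior (a visible file that won't execute, or an invisible one that will) is consistent with stale NFS attribute caching on the client.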
Why does this executable still execute after I deleted it?
1,314,899,170,000
Is there a way to detect if an external command exists (e.g. wget, svn)? More specifically, today I was trying to run one of the scripts I wrote, and the person didn't have wget or svn installed. The script just downloads a file and extracts it, or uses svn to export the trunk.
In Bash the type shell built-in gives information about the executable things: aliases, functions, executables. See help type for details.

# just check for existence
type -t 'yourfunction' > /dev/null || echo 'error: yourfunction not found'

# explicitly check for the given type
[[ "$( type -t 'yourfunction' )" != 'function' ]] && \
    echo 'error: yourfunction not found or is not a function'
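For external commands specifically (rather than functions or aliases), the POSIX builtin `command -v` is the most portable check and works in plain sh as well as Bash. A hedged sketch of how the script in the question might guard its dependencies (the `fetch` helper name is invented for illustration):

```shell
# Portable dependency check: prefer wget, fall back to curl.
# `command -v` prints the resolved command and exits non-zero if absent.
if command -v wget >/dev/null 2>&1; then
    fetch() { wget -O "$2" "$1"; }     # fetch URL into a local file
elif command -v curl >/dev/null 2>&1; then
    fetch() { curl -o "$2" "$1"; }
else
    echo "error: this script needs wget or curl" >&2
    exit 1
fi
```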
How would one detect if external command exists in a script?
I am trying to port Android apps to Linux (don't laugh :) and I have come across a problem. When I try to execute an Android executable (app_process) with ./app_process after adding the executable permission, it says the file doesn't exist, although cat ./app_process works. Also, in my file manager (Pantheon Files) the executable shows the shared library icon. Is there any way to get these to execute on Linux?
Android and Linux are two different operating systems. You can't just take an executable from one and run it on the other.

The first hurdle is the kernel. Android and Linux are based on the same kernel, but they have a few different features. In particular, Android provides binders, which have only existed in the mainstream kernel (the one found in Linux distributions) since version 3.19. A pure native-code application might not use binders, but most Java apps do.

The second hurdle is the dynamic libraries. If you have a dynamically-linked executable, it invokes the dynamic linker. Android and Linux have different dynamic linkers, and if the dynamic linker is not present, you get the same error as if the executable itself was not present. If you copy the dynamic linker, the configuration files that it needs, and the native libraries, then you should be able to run most native programs. You'll need to copy most of /system, and the copy needs to be located at /system.

If you want to run Java apps, it's more complicated. You need the Java runtime environment (Dalvik/ART), and most apps require some Android daemons as well (some native-code apps also require those daemons).

The upshot is that while the two systems can cohabit on one kernel, this needs to be a recent enough kernel, or an Android kernel (an Android kernel can run most Linux applications), and both operating systems need to be installed: you can't just run an application from one on the other. I'm not aware of any ready-made installer for Android on top of Linux. There are installers for the other way round, however, in particular LinuxonAndroid.

If the objective is to run an Android app on a Linux system, then the easiest way by far is to run it inside the emulator which is part of the Android development tools.
Why can't I execute Android x86 executables on Linux
Reading What do the brackets around processes mean? I understand that the executable name is printed. The Linux ps man page says:

Sometimes the process args will be unavailable; when this happens, ps will instead print the executable name in brackets.

However with ps -Awwo pid,comm,args I get:

  PID COMMAND         COMMAND
    1 init            init [2]

What does this mean? Is the "executable name" supposed to be init or [2]? I suppose the executable is of course init, so what is [2]? Why is it printed? (Also, I don't really get why it can't show the full path if it knows the executable name.)
Both the comm column and the first word of the args column in the ps output show the name of the executable program if everybody involved follows the default convention. However, it is possible to have discrepancies for various reasons.

When a program starts, the command name as shown in the args column is chosen by the parent program that executes it and is passed as an argument (argv[0]). By convention, the parent chooses the base name of the executable (i.e. the path to the executable without the directory part), but this is not enforced. Once the program is running, it can overwrite that string. Init (at least the traditional Linux SysVinit) overwrites its argv[0] to indicate the current runlevel.

On Linux, the comm column is initially filled in by the kernel with the first 16 characters of the base name of the executable. The process can change the content with the prctl system call. If the executable is renamed or deleted, neither the comm column nor the args column will reflect this.

ps doesn't display the path to the executable; that's not in its job description. lsof can tell you, with lsof -a -p 1 -d txt. On Linux, you can see this information in files under /proc/PID/:

The process name (comm field) is in /proc/1/stat (second field, in parentheses) and /proc/1/status (Name field).
The path to the executable is available via /proc/1/exe.
The arguments (starting with argv[0]) are in /proc/1/cmdline (the arguments are separated by null bytes).
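The three sources of the name described above can be inspected directly from a shell on Linux; a small sketch using the shell's own PID (requires a mounted /proc filesystem):

```shell
# Inspect the current shell's own name and arguments via /proc.
cat "/proc/$$/comm"                     # kernel's 16-char process name
readlink "/proc/$$/exe"                 # path to the actual executable
tr '\0' ' ' < "/proc/$$/cmdline"; echo  # argv[], NUL-separated on disk
```

For init, substitute 1 for $$ (reading another user's entries may require root).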
What does `init [2]` mean in the COMMAND column of ps?
$ uname -a Linux kali 4.3.0-kali1-amd64 #1 SMP Debian 4.3.3-5kali4 (2016-01-13) x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Kali Description: Kali GNU/Linux Rolling Release: kali-rolling Codename: kali-rolling Recently, I download IDA Demo from hex-rays website. After downloading and extracting it, I move to the directory contents it. But when I run ./idaq command. I received: $ ./idaq bash: ./idaq: No such file or directory I tried to run this command $ file ./idaq ./idaq: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[ha1]=2b4f4a30e791c6fa175a4d44c868ea9ac8f9d7da, stripped Then I knew it is a 32-bit object file. After some Google search, I go to this page Getting "Not found" message when running a 32-bit binary on a 64-bit system, but these instructions don't help me anything. My question is how I can run it. P/s: My question is not elegent, if you don't like it, just press downvote.
Oh, I think I must install gcc-multilib first:

sudo apt install gcc-multilib
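Before installing anything, it can help to confirm the diagnosis from the shell. The fifth byte of an ELF header (offset 4) is the class: 1 for 32-bit, 2 for 64-bit. The confusing "No such file or directory" error actually refers to the missing 32-bit loader named in the binary's interpreter field. A sketch, assuming a Linux system with coreutils:

```shell
# Read the ELF class byte (offset 4): 1 = 32-bit, 2 = 64-bit.
elf_class() {
    dd if="$1" bs=1 skip=4 count=1 2>/dev/null | od -An -tu1 | tr -d ' '
}
elf_class /bin/sh    # 2 on a typical 64-bit system, 1 on a 32-bit one

# The error means the 32-bit loader the binary names (/lib/ld-linux.so.2)
# is not installed; check for it directly:
[ -e /lib/ld-linux.so.2 ] || echo "32-bit loader not installed"
```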
bash: ./idaq: No such file or directory
Once while talking with my friend, I was trying to joke that we might stand a better chance of completing our project if we just ran loads of random programs and expected one of them to solve our problem. To demonstrate that, I wrote this "proof of concept":

while true; do
    dd if=/dev/urandom of=pliczek count=1
    chmod +x pliczek
    ./pliczek
done

To my horror, when I ran this loop and called ls, I noticed a lot of files with random-looking filenames in my current directory (tested on Fedora Linux on 64-bit x86). Now I can't stop wondering: what could actually have happened?
You are writing 512 bytes into a file and executing it. So the outcome could be anything a program of 512 bytes could possibly do. What that is depends on your machine. But 512 bytes are plenty of instructions, so basically anything could have happened: changing the root password, creating random files, or generating a tar archive containing the source code for your project.

An ELF header is not required. Simple ASCII text is sufficient and will be interpreted by the currently running shell (because of the missing shebang line). A greater-than sign (>) redirects output to a file, so that single byte is already sufficient to create files. Example:

# this will create a file named abc123 in almost every shell
:>abc123

# another variant
>abc123^D

This demonstrates that there are several ways to create files using a small number of bytes, which makes it more likely to happen.
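The shell-fallback mechanism can be demonstrated safely with a known byte sequence instead of random ones (the file names are taken from the question):

```shell
# A file with no shebang and no ELF magic fails execve() with ENOEXEC;
# the shell then falls back to interpreting it as a script, so a single
# '>' redirection byte is enough to create a file as a side effect.
printf '>abc123\n' > pliczek
chmod +x pliczek
./pliczek              # the "program" is just one redirection
ls abc123              # the file was created
rm -f pliczek abc123   # clean up
```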
What does this command actually cause?
I've recently purchased a USB stick which I will be using to share data between me and my colleagues. I'd like to format it as ext3, but I know this will cause trouble because, for instance, Mac OS X has trouble mounting that. The problem is that any other FS I've used before (except for ext2 or ext4) seems to screw up the executable bit on files that have been on it. E.g. I put a normal non-executable PDF file on the stick, I take it off again, and suddenly it's executable (i.e. the executable permission is enabled). I don't like these kinds of trickeries. What filesystem should I use? Or is this problem not FS-related?
According to Universal Disk Format - Wikipedia, UDF may work: it has POSIX-style permissions, is readable by Linux, Mac OS X, and Windows XP and up, and is writable by Linux, Mac OS X, and Windows Vista and up.
Most unix-like filesystem that can be mounted under windows and Mac OS X
I understand that there are two stages of execution in Linux, dealing with every command we run. I'll name this how I understand them because I don't know the original phrasing: Shell handling --- The shell edits the command (splitting it to different rows etc) and all of this is done in a different shell than the current one. Execution of the outcome after shell handling (in the original shell we work with). Can someone please answer with the name of these operations and reference to some reading material he finds best for new learners on this?
Shell handling --- The shell edits the command (splitting it to different rows etc)

Yeah, sort of. The shell gets a command as a single string (usually one line of input), and turns it into a set of strings that actually go to the executable it eventually runs. The shell splits the whitespace-separated words from a single string into multiple strings, but also handles quotes, expands variables, etc. So, something like ls "$options" "/filename with spaces" might result in the three strings ls, -l (from the value of $options), and /filename with spaces (quote removal). These get passed to the exec() system call that runs the program.

and all of this is done in a different shell than the current one.

No, not really. Some shell expansions (like $( ... )) spawn subshells to do the hard work, but that doesn't happen for a regular "simple" command line.

Execution of the outcome after shell handling (in the original shell we work with).

Actually executing the program after the command line is parsed is a logically separate step. But technically that happens in another process, since the way to run another program on Unix involves first calling fork(), which creates a new process as a copy of the first one, and then calling exec() to replace this copy (of the shell) with the actual program to run (say ls in the example). If the command is started with exec (as in exec ls), the forking is skipped, and the shell replaces itself with the command it's starting. As mentioned in the comments, shell builtins (like echo in many shells) also often run in the same process, without forking.

(All of the above is somewhat simplified. Real shells may have other features that are not described here.)
What are the main execution stages in Linux (how does a program basically get executed in Linux)?
The description of the bzImage in Wikipedia is really confusing me. The above picture is from Wikipedia, but the line next to it is: The bzImage file is in a specific format: It contains concatenated bootsect.o + setup.o + misc.o + piggy.o. I can't find the others (misc.o and piggy.o) in the image. I would also like to get more clarity on these object files. The info on this post about why we can't boot a vmlinux file is also really confusing me. Another doubt is regarding the System.map. How is it linked to the bzImage? I know it contains the symbols of vmlinux before creating bzImage. But then at the time of booting, how does bzImage get attached to the System.map?
Until Linux 2.6.22, bzImage contained:

bbootsect (bootsect.o)
bsetup (setup.o)
bvmlinux (head.o, misc.o, piggy.o)

Linux 2.6.23 merged bbootsect and bsetup into one (header.o). At boot up, the kernel needs to run some initialization sequences (see the header file above) which are only necessary to bring the system into a desired, usable state. At runtime, those sequences are not important anymore (so why include them in the running kernel?).

System.map stands in relation with vmlinux; bzImage is just the compressed container out of which vmlinux gets extracted at boot time (=> bzImage doesn't really care about System.map). Linux 2.5.39 introduced CONFIG_KALLSYMS. If enabled, the kernel keeps its own map of symbols (/proc/kallsyms). System.map is primarily used by user-space programs like klogd and ksymoops for debugging purposes. Where to put System.map depends on the user-space programs that consult it. ksymoops tries to get the symbol map from either /proc/ksyms or /usr/src/linux/System.map. klogd searches in /boot/System.map, /System.map and /usr/src/linux/System.map. Removing /boot/System.map generated no problems on a Linux system with kernel 2.6.27.19.
More doubts in bzImage
When I am building software from source on a GNU+Linux system, during the ./configure stage I frequently see the following line: checking for suffix of executables... How do I create such a check in a bash script? The reason I want to know this is that I want to create a makefile in which it compiles with suffix .exe on Cygwin, but no suffix on true GNU+Linux.
The test is done by compiling a small dummy C program and checking how the compiler names the output file. The following example is a simplified version of what configure is doing:

#!/bin/sh
cat << EOT > dummy.c
int main(int argc, char ** argv)
{
    return 0;
}
EOT
gcc -o dummy dummy.c
if [ -f dummy.exe ] ; then
    echo ".exe"   # executables get an .exe suffix (e.g. on Cygwin)
fi

I would suggest you use autoconf to generate a configure script and use it for your purpose.
Find out extension of executable files?
I was working through my C programs. I am new to Linux/UNIX development and was having a look around. I created a simple Hello World C program and was inspecting the compilation process. I tried to read the file header of the final executable and got this output:

$ objdump -f my_output
file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x08048320

I understand the elf32-i386 part, but I am not quite sure about the other portions of the header. Is D_PAGED somehow related to demand paging? What do EXEC_P and HAS_SYMS mean? Is the start address the logical address of main() in the program?
The flags in the output are BFD (Binary File Descriptor) flags. They're part of the binutils package; you can read what the flags mean if you look in the BFD header file /usr/include/bfd.h, or here.

The reference to the "flags" 0x00000112 is what's called a flag field. It's binary, and each bit represents a particular feature: a one means the flag is on, or set, and a zero means it's not. Also note that the "0x..." prefix means it's a hexadecimal value, so converting it from hex to binary: 0x00000112 = 0001 0001 0010. So the flags that correspond to the 2nd, 5th, and 9th bits in the flag field are set. Those are the flags being shown by name in the 3rd line of output from the objdump command.

Meaning of flags

The 3 flags that your executable has are pretty standard. Read the bits from right to left!

2nd bit - 0000 0000 0010
/* BFD is directly executable. */
#define EXEC_P 0x02

5th bit - 0000 0001 0000
/* BFD has symbols. */
#define HAS_SYMS 0x10

9th bit - 0001 0000 0000
/* BFD is dynamically paged (this is like an a.out ZMAGIC file) (the linker sets this by default, but clears it for -r or -n or -N). */
#define D_PAGED 0x100

So the takeaways:

this is an executable file
it includes a symbol table, so if you debug it using the GNU Debugger, gdb, the functions will have meaningful names
the executable is dynamically linked to the standard libraries such as glibc etc.

Start address

The last line, start address ..., is, as you guessed, where the actual .CODE starts for the executable.
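The bit-testing arithmetic above can be reproduced with plain shell arithmetic, which makes it easy to check which flags are set in any flag word objdump prints:

```shell
# Test each flag bit of 0x00000112 with shell arithmetic;
# a non-zero AND result means the flag is set.
flags=0x00000112
[ $(( flags & 0x02  )) -ne 0 ] && echo "EXEC_P is set"
[ $(( flags & 0x10  )) -ne 0 ] && echo "HAS_SYMS is set"
[ $(( flags & 0x100 )) -ne 0 ] && echo "D_PAGED is set"
```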
Portions of the file Header
I have got the directory called Test and a few directories inside it. Both Test and the directories inside it have executable files. I'd like to print them with ls. I'd use this command. ls -l `find Test/ -perm /u=x,g=x,o=x -type f` Is this a good/right/quick command or not? My solution is: find Test/ -executable -type f -exec ls -l {} \; and got the same result as warl0ck and pradeepchhetri offered.
Not really; you can integrate the ls command with find:

find Test/ -type f -perm /u=x,g=x,o=x -exec ls -l {} \;

UPDATE

Actually, -executable is not an equivalent of -perm /u=x,g=x,o=x. You might have files that are executable only by the group or by others, which will not be displayed. So it depends on your purpose: if you want files executable by you, it's okay to use -executable.
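A self-contained way to convince yourself what the -perm test matches (the directory and file names are throwaway; /111 is the octal spelling of /u=x,g=x,o=x):

```shell
# Demo in a scratch directory: only the file with an x bit is listed.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/prog"
chmod 644 "$dir/plain"          # rw-r--r--  : no execute bit anywhere
chmod 755 "$dir/prog"           # rwxr-xr-x  : executable
find "$dir" -type f -perm /111 -exec ls -l {} +
rm -r "$dir"
```

Using `{} +` instead of `{} \;` passes many files to a single ls invocation, which is faster and gives ls a chance to align its columns.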
Find executable files recursively
When a command is piped to another command in what way or in what format does the piped output exist/get sent? Is it a temporary file? Is it a string? And how does the command that receives the piped output decode/read that output? Example: echo "Someone string" | ./program | tail Does the program "program" receive the output of echo as a file? How would the program read in that input?
It goes as a byte stream. That's how it's sent and how it's received. The interfaces are file descriptors (or handles). A regular file is also handled that way. See:

Standard streams on Wikipedia
File descriptor on Wikipedia
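A quick demonstration that the reader sees nothing but bytes on its standard input:

```shell
# A pipe carries an ordered stream of bytes: no file name, no record
# boundaries.  The reader gets them on file descriptor 0 (stdin).
printf 'hi\n' | wc -c    # wc counts 3 bytes: 'h', 'i', newline
# wc doesn't know (or care) whether fd 0 is a pipe, a file or a terminal;
# it just read()s until end-of-file.
```

So in the question's example, ./program simply reads its standard input (fd 0) exactly as it would read a file; the kernel delivers echo's output bytes through the pipe buffer.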
In what format does piped output get sent and received from one command/program to another? [duplicate]
I am writing an expect script which can be used on multiple operating systems, but the problem is I can't use #!/usr/bin/expect everywhere, so instead I tried to put #!`which expect` at the top, but it failed:

[sesiv@itseelm-lx4151 ~]$ ./sendcommand
-bash: ./sendcommand: `which: bad interpreter: No such file or directory

Is there any solution for this?
One trick that mostly works (for the perl, python, and php interpreters, and probably others):

#!/usr/bin/env expect

I think env is always in /usr/bin/. A lot of interpreters can run that way now. Other hacks used to exist, but weren't understandable, or weren't all that portable.
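A quick way to see the trick in action (the script name is arbitrary; sh stands in for expect so the demo works anywhere):

```shell
# Build and run a tiny script whose shebang locates the interpreter via env.
cat > hello.sh <<'EOF'
#!/usr/bin/env sh
echo "hello from $0"
EOF
chmod +x hello.sh
./hello.sh      # env searches $PATH for "sh", then execs it on the script
rm hello.sh
```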
Specifying a generic interpreter for a program like expect?
I am trying to run the statistics software Stata 11 on Ubuntu 11.10 as a regular user, and I get the following error message:

bash: xstata: Permission denied

The user privileges seem OK to me, though:

-rwxr-x--x 1 root root 16177752 2009-08-27 16:29 xstata*

I would very much appreciate some advice on how to resolve this issue!
In the ls output you can see the file owner (root) and group (root). The user privileges apply to the file owner (rwx), the file group (r-x), and others (--x). Because you are not root (and I suppose that you are not in the root group), only the "others" permissions (--x) apply to you. Thus you can run the file, but not read it. As a quick fix, try chmod +r xstata; this gives read permission to all.
"Permission denied" when starting binary despite "rwx" privilege
Well, I'm just too green at Linux, but I'm stuck on a thing that I should know, and I don't. My file has the following permission bits set:

-r-xr-xr-x

It is owned by root (but that should not matter, since -x is active even for any user). It is not writable, and since it resides on a CDROM (even if it is a virtual ISO mounted as a CDROM) that sounds OK. But I can't execute it: it says "Permission Denied". What am I missing? The mount itself has execution permission, so it should execute; why doesn't it?

EDIT: I solved the issue, but not my doubt, since explicitly running bash ./autorun.sh works. I need a root account anyway for what's inside, but it works.
The most likely explanation is Patrick's: the filesystem is mounted with the noexec option, so the execute permission bits on all files are ignored, and you cannot directly execute any program residing on this filesystem. Note that the noexec mount option is implied by the user option in /etc/fstab (supposedly for security reasons, even though unlike the nodev and nosuid options, noexec does not in fact provide any security). If you use user and want to have executable files, use user,exec. It's also possible that the shebang line of the script points to a file that exists but isn't executable — in that case, the error message confusingly refers to the script even though the error is with the interpreter. However it's unlikely that the shebang would point to a wrong existing file (if the error was “not found”, a dangling shebang would be more plausible).
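An illustrative /etc/fstab entry showing the user,exec combination (the device and mount point are made up):

```
# /etc/fstab -- illustrative entry (device and mount point are invented).
# "user" alone implies noexec,nosuid,nodev; adding "exec" after it
# restores the ability to run programs from this filesystem.
/dev/sr0   /media/cdrom   iso9660   ro,user,exec   0   0
```

The option order matters: exec must come after user to override the implied noexec.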
Can't execute a file with execute permission bit set [duplicate]
Can an executable know where it is stored? For my open source project, I wrote a small bash script to automate some operations. The script lies at the root of the project and performs its tasks on all files below. People have the project files on their own computers, at their favorite path, so I can't just hardcode cd /home/nico/projects/theproject at the beginning of the script. I don't want to force people to manually cd to the project's directory everytime before executing either. I would like people to be able to use a Gnome shortcut to launch the script. To do so, the script needs to know where itself is stored.
#! /bin/bash
echo "I am located in $(dirname "$0")"
cd "$(dirname "$0")"

Note that this may be a relative path.
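Because dirname "$0" may be relative (e.g. "." when the script is run as ./script), a later cd inside the script would break any saved relative paths. A common hardening sketch that resolves the directory to an absolute path first:

```shell
# Resolve the script's directory to an absolute path once, up front.
# ($0 is the path the script was invoked with.)
script_dir=$(cd -- "$(dirname -- "$0")" && pwd)
echo "running from: $script_dir"
cd -- "$script_dir"    # safe even after other directory changes
```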
Can an executable know where it is stored?
Surely this is an easy question, that I just don't know how to search for, but if I have two identically named executable files, one in /usr and one in /usr/local (for example), how can I change which one is executed by default without specifying the path, as in /usr/local/file, if $ which returns /usr? Centos6.4 btw.
It's not always as simple as "which comes first in $PATH"; see https://superuser.com/questions/358695/how-does-linux-decide-which-executable-im-trying-to-run. For a quick fix, set an alias in .bashrc (assuming you're using bash...):

alias gorgonzola='/usr/local/gorgonzola'

Note: whitespace is not allowed around the "=" sign.
How to change which program is executed by default
Possible Duplicate: “No such file or directory” lies on Optware installed binaries I am currently trying to get some piece of hardware to work on an embedded device. One part of the driver is an executable, which has to be started to get the hardware to work. However when I try to execute it I get bash: no such file or directory I have checked and the file is definitely there, executable etc. I looked around online and I found that this could also be related to a problem between the architecture for which the file has been compiled and the one I am using. However I cannot find anything wrong. Here is some of the diagnostics I have run: root@desktop:~# /usr/local/eGTouchARMwithX/eGTouchD bash: /usr/local/eGTouchARMwithX/eGTouchD: No such file or directory root@desktop:~# ls -l /usr/local/eGTouchARMwithX/eGTouchD -rwxr-xr-x 1 root root 198870 Jul 19 14:11 /usr/local/eGTouchARMwithX/eGTouchD root@desktop:~# file /usr/local/eGTouchARMwithX/eGTouchD /usr/local/eGTouchARMwithX/eGTouchD: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, not stripped root@desktop:~# uname -a Linux desktop 3.2.0-1415-omap4 #20-Ubuntu SMP PREEMPT Mon Jun 18 19:03:59 UTC 2012 armv7l armv7l armv7l GNU/Linux root@desktop:~# file /bin/bash /bin/bash: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.31, BuildID[sha1]=0x1266f80916e3e59eb001459610510f5d05630297, stripped root@desktop:~# ldd /usr/local/eGTouchARMwithX/eGTouchD not a dynamic executable The only hint I have is that file reports this file as a dynamically linked executable, whereas ldd reports it as not being a dynamic executeable. Any ideas what could be wrong with the file and how to fix this?
Although the architecture is the same, and this message commonly indicates that the binary is for a foreign architecture, the real meaning of this message is that it's linked against an incompatible C runtime library. See this message on the Linux From Scratch website. In his case upgrading glibc gave him this error for several (but not all) commands. You'll need to install the proper C runtime library that matches the binary or get a binary that matches your C runtime library.
File not found for file which is there [duplicate]
I'm looking for a way to have an executable binary hardwired into a script. Something like this:

#!/bin/bash
...some shell code
execute binary:
>>> binary code ... <<<
...some more shell code possibly

I've found this solution, which uses uuencode and is good. But it depends on sharutils, which seems to be an extra, as it isn't included by default on my Debian. I've been thinking of having the binary encoded with base64 and then decoding it and somehow executing it, possibly without creating any temp files. I remember there was a library that's responsible for executing things, but I forget what it was. It might be best to have a construct as simple as this execute:

$ <(base64 out | base64 -d)
bash: /dev/fd/63: Permission denied
How about:

unpack() {
    tail +9 "$0" > /tmp/xxx.$$
    chmod +x /tmp/xxx.$$
}
unpack
/tmp/xxx.$$ <add args here>
rm /tmp/xxx.$$
exit
<add the binary here>

If you don't like to have binary data in the script, you may encode it and pipe the tail output through the related decoder. Note that you need to replace the +9 by the line number where the binary starts, in case you modify the script to be of a different length. If your tail implementation does not support the argument +9, try -n +9 instead. If you are in fear of clobbering an existing /tmp file, try to use mktemp(1) to create the tmp filename.

Note that this method was used by the upgrade scripts for the SunPro compiler suite, which included the compressed tar archive with the whole upgrade and some shell code to manage the handling around it.
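The base64 idea from the question combines naturally with this marker approach. Here is a hedged sketch that generates such a self-extracting script; /bin/echo stands in for the real binary, and the marker and file names are invented. Using a marker line instead of a fixed line number means the header can grow without renumbering:

```shell
# Build a self-extracting runner around an arbitrary payload binary.
payload=/bin/echo
{
    cat <<'EOF'
#!/bin/sh
tmp=$(mktemp ./embedded.XXXXXX) || exit 1
# Everything after the marker line is the base64-encoded program.
sed '1,/^__PAYLOAD__$/d' "$0" | base64 -d > "$tmp"
chmod +x "$tmp"
"$tmp" "$@"; status=$?
rm -f "$tmp"
exit "$status"
__PAYLOAD__
EOF
    base64 "$payload"
} > run-embedded.sh
chmod +x run-embedded.sh
./run-embedded.sh hello     # runs the embedded copy of echo
rm run-embedded.sh
```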
Is there a way to embed an executable binary in a shell script without extra tools?
Please help me find out how to limit the number of concurrent executions of a program. I mean, a particular program should be able to run only, for example, 5 times at once. I know how to limit the process count for a user, but how can I do that for a program, using PAM?
PAM is used to authorize logins and account modifications. It is not at all relevant to restricting a specific program. The only way to apply a limit to the number of times a program can be executed is to invoke it through a wrapper that applies this limit. Users can of course bypass this wrapper by having their own copy of the program; if you don't want that, don't give those users accounts on your machine.

To restrict a program to a single instance, you can make it take an exclusive lock on a file. There's no straightforward way to use one file to allow a limited number of instances, but you can use 5 files to allow 5 instances, and make the wrapper script try each file in turn. Create a directory /var/lib/myapp/instances (or wherever you want to put it) and create 5 files in it, all world-readable but only writable by root:

umask 022
mkdir -p /var/lib/myapp/instances
touch /var/lib/myapp/instances/{1,2,3,4,5}

Wrapper script (replace myapp.original by the path to the original executable), using Linux's flock utility:

#!/bin/sh
for instance in /var/lib/myapp/instances/*; do
    flock -w 0 -E 128 "$instance" myapp.original "$@"
    ret=$?
    if [ "$ret" -ne 128 ]; then exit "$ret"; fi
done
echo >&2 "Maximum number of instances of myapp reached."
exit 128
Limit number of program executions
As shown by the following listing:

$ ll
total 136
-rwxr-xr-x 1 kaiyin kaiyin  19067 May  9  2013 dbmeister.py
-rwxr-xr-x 1 kaiyin kaiyin   1617 Jul 29  2011 locuszoom
-rwxr-xr-x 1 kaiyin kaiyin 112546 May  9  2013 locuszoom.R
$ ./locuszoom
-bash: ./locuszoom: Permission denied

locuszoom is executable by everyone, but still can't be executed. The files are on a hard disk mounted at /media/data1.
The hard disk needs to be remounted so that the exec mount option is included.

Excerpt from the mount man page:

FILESYSTEM INDEPENDENT MOUNT OPTIONS
....
exec   Permit execution of binaries.

You can do this in one of two ways.

Examples

Via the command line:

$ mount -o remount,exec /media/data1

Or in your /etc/fstab:

# <file system> <dir>         <type> <options>       <dump> <pass>
/dev/sdb1       /media/data1  ext4   rw,exec,noauto  0      0
File executable by all, yet still cannot be executed?
I have noticed that sometimes python scripts are not being started directly, ie /foo/bar.py, but rather from a shell script, which does nothing else than /usr/bin/python -O /foo/bar.py $@ One such example is wicd network manager. /usr/bin/wicd-gtk is a shell script, which starts the wicd-client.py: $ cat /usr/bin/wicd-gtk exec /usr/bin/python -O /usr/share/wicd/gtk/wicd-client.py $@ What is the purpose of this extra step? What would be the difference if I started /usr/share/wicd/gtk/wicd-client.py directly (provided it is executable) ?
You didn't post the full script: the script does other things before running wicd-client.py. It first ensures that a certain directory and a certain symbolic link exist:

# check_firstrun()
if [ ! -d "$HOME/.wicd" ]; then
    mkdir -p "$HOME/.wicd"
fi
# Make sure the user knows WHEREAREMYFILES ;-)
if [ -e "/var/lib/wicd/WHEREAREMYFILES" ] && [ ! -L "$HOME/.wicd/WHEREAREMYFILES" ]; then
    ln -s "/var/lib/wicd/WHEREAREMYFILES" "$HOME/.wicd/WHEREAREMYFILES"
fi

Then it runs Python with the -O option, which causes it to optimize the bytecode. I don't know how useful that is. The wrapper script also forces /usr/bin/python to be used, whereas /usr/share/wicd/gtk/wicd-client.py starts with #!/usr/bin/env python, so it picks up whichever python comes first in the command search path. On most systems this won't make any difference.

Note that there's a bug in this script: $@ should be "$@". The wrapper script will fail if any argument contains whitespace or the wildcard characters \[*?.

You could safely run /usr/share/wicd/gtk/wicd-client.py manually, as long as ~/.wicd exists. The Debian package doesn't make it executable, though; maybe other distributions do.
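The $@-versus-"$@" bug is easy to demonstrate with a tiny function that just counts the arguments it receives:

```shell
# Why "$@" matters: compare the argument count a child command sees.
count() { echo $#; }
set -- "file with spaces" second   # two positional parameters
count $@      # unquoted: word splitting yields 4 arguments
count "$@"    # quoted: the original 2 arguments survive intact
```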
direct execution of python scripts
I'm running a 64bit kernel, already have CONFIG_IA32_EMULATION set, so do I still need CONFIG_IA32_AOUT enabled? From the help in menuconfig, I don't quite get it.
Short answer: If your system is a normal desktop/laptop and you don't run any really archaic software, you should be safe to disable CONFIG_IA32_AOUT. Keep CONFIG_IA32_EMULATION, as chances are that some of your binaries are still 32-bit. Explanation: There are two issues involved here: executable file formats and executing 32-bit code on a 64-bit system. You can read about file formats on wikipedia and have a look at their comparison, but the most important information for you is that ELF is the current standard and a.out is its predecessor. It is very unlikely that you'll find any recent program in a form of an a.out binary (don't mistake file format with the default output name that compilers give to binaries - the latter typically is still a.out for historical reasons, in spite of the binaries being in ELF format). If you have a 64-bit system, chances are that some of your programs are still 32-bit. This is much more probable than coming across an a.out binary. To make it clear: binaries in both ELF and a.out format can be both 32- and 64-bit. These distinctions are separate (as you can see from the comparison).
What does CONFIG_IA32_AOUT do actually?
1,314,899,170,000
I was going through the questions of this site. In this particular question, I see the command lsof being used to list the files that are open for a particular user. I ran the below command in my terminal. lsof -a -u root -d txt I am seeing a long output which are completely irrelevant (at least to me). I am finding it hard to understand the output produced. This is mapping-d 3992 root txt REG 8,5 29728 7169187 /usr/libexec/mapping-daemon clock-app 4005 root txt REG 8,5 88048 7169216 /usr/libexec/clock-applet mixer_app 4007 root txt REG 8,5 53484 7169197 /usr/libexec/mixer_applet2 gnome-scr 4078 root txt REG 8,5 168628 1337742 /usr/bin/gnome-screensaver notificat 4081 root txt REG 8,5 34732 1324168 /usr/libexec/notification-daemon gnome-ter 4219 root txt REG 8,5 309400 1318348 /usr/bin/gnome-terminal gnome-pty 4221 root txt REG 8,5 12384 1899888 /usr/lib/vte/gnome-pty-helper bash 4222 root txt REG 8,5 735932 196459 /bin/bash firefox 15551 root txt REG 8,5 121288 2424613 /usr/lib/firefox/firefox npviewer. 15790 root txt REG 8,5 174364 1996912 /usr/lib/nspluginwrapper/npviewer.bin bash 15908 root txt REG 8,5 735932 196459 /bin/bash lsof 16014 root txt REG 8,5 129820 1323233 /usr/sbin/lsof lsof 16015 root txt REG 8,5 129820 1323233 /usr/sbin/lsof I was expecting, if I had opened a txt file as the root user, only that file's information would be displayed, if I run the lsof command. Can someone please help me in understanding what this command actually produces as the output?
lsof lists all files that are accessed by a program one way or another. The fourth column (FD) describes the way in which the program is accessing the file. Here are some common cases (there are others): A number: the file is opened by the process, and the number is the file descriptor. Letters after the number indicate the opening mode (e.g. r for read-only, w for write-only, u for both). cwd: the file is the process's current working directory. txt: the file is the process's executable. mem: the file is mapped to the process's virtual memory space. The descriptor type txt has nothing to do with “text files” in the sense of containing human-readable text or of having a name ending with .txt. Here “text” is an odd bit of terminology referring to executable code, as in the text segment of an executable file, which is the section that contains the code. This strange name comes from a now-defunct programming community which predates Unix (General Electric, whose other naming legacy in the Unix world is the “GECOS field”). Thus what you're seeing is the main executable of each process.
lsof - debug the output information
1,314,899,170,000
I see that there are other questions like this out there, but the answers there did not work for me. I downloaded the Julia 1.9.2 (Linux, x86-64, glibc) prebuilt binary and tried to execute the binary, but I get the following error. bash: ./julia: cannot execute: required file not found I am able to execute all other binaries on my machine. My understanding is that this is because bash cannot find the interpreter required for executing this file. In this case it should be a loader, I guess? Here's the output of file: $ file julia julia: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.4.0, with debug_info, not stripped and of ldd: $ ldd julia linux-vdso.so.1 (0x00007ffdae587000) libdl.so.2 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libdl.so.2 (0x00007fe5955f4000) libpthread.so.0 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libpthread.so.0 (0x00007fe5955ef000) libc.so.6 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 (0x00007fe595409000) libjulia.so.1 => /home/chaitanyak/Downloads/julia-1.9.2/bin/./../lib/libjulia.so.1 (0x00007fe5953e6000) /lib64/ld-linux-x86-64.so.2 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib64/ld-linux-x86-64.so.2 (0x00007fe5955fb000) I am using NixOS 23.05.2573.61676e4dcfee (Stoat) x86_64.
Verbose output of ldd $ ldd -v julia linux-vdso.so.1 (0x00007ffcd2942000) libdl.so.2 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libdl.so.2 (0x00007efd29f12000) libpthread.so.0 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libpthread.so.0 (0x00007efd29f0d000) libc.so.6 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 (0x00007efd29d27000) libjulia.so.1 => /home/chaitanyak/Downloads/julia-1.9.2/bin/./../lib/libjulia.so.1 (0x00007efd29d04000) /lib64/ld-linux-x86-64.so.2 => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib64/ld-linux-x86-64.so.2 (0x00007efd29f19000) Version information: ./julia: libc.so.6 (GLIBC_2.2.5) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libdl.so.2: libc.so.6 (GLIBC_ABI_DT_RELR) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 libc.so.6 (GLIBC_2.2.5) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libpthread.so.0: libc.so.6 (GLIBC_ABI_DT_RELR) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 libc.so.6 (GLIBC_2.2.5) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6: ld-linux-x86-64.so.2 (GLIBC_2.2.5) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib64/ld-linux-x86-64.so.2 ld-linux-x86-64.so.2 (GLIBC_2.3) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib64/ld-linux-x86-64.so.2 ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib64/ld-linux-x86-64.so.2 /home/chaitanyak/Downloads/julia-1.9.2/bin/./../lib/libjulia.so.1: libdl.so.2 (GLIBC_2.3.3) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libdl.so.2 libdl.so.2 (GLIBC_2.2.5) => 
/nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libdl.so.2 libpthread.so.0 (GLIBC_2.2.5) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libpthread.so.0 libc.so.6 (GLIBC_2.2.5) => /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libc.so.6 Edit I also tried using the possible interpreters directly. $ /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib64/ld-linux-x86-64.so.2 julia julia: error while loading shared libraries: julia: cannot open shared object file: No such file or directory $ /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/ld-linux-x86-64.so.2 julia julia: error while loading shared libraries: julia: cannot open shared object file: No such file or directory So, it looks like that it cannot open some .so file, but does not provide its name. Edit 2 I ran the interpreters under gdb and it gives some more information. (gdb) r Downloads/julia-1.9.2/bin/julia Starting program: /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/ld-linux-x86-64.so.2 Downloads/julia-1.9.2/bin/julia [Thread debugging using libthread_db enabled] Using host libthread_db library "/nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/libthread_db.so.1". [Detaching after fork from child process 29527] [New Thread 0x7ffff17ff6c0 (LWP 29528)] ERROR: could not load library "/nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/../lib/julia/sys.so" /nix/store/46m4xx889wlhsdj72j38fnlyyvvvvbyb-glibc-2.37-8/lib/../lib/julia/sys.so: cannot open shared object file: No such file or directory [Thread 0x7ffff7dae0c0 (LWP 29524) exited] [Thread 0x7ffff17ff6c0 (LWP 29528) exited] [New process 29524] [Inferior 1 (process 29524) exited with code 01] (gdb) So, it seems that it is trying to locate sys.so that ships with Julia inside the nix store. So, is this a nix-specific problem? Is the only workaround to create a nix package for Julia 1.9? 
Edit 3 So, Julia ships with its own shared object files, and instead of searching for them there, the binary tries to search for them relative to some other location in my nix store. Is modifying RPATH of the julia binary using patchelf a solution to this?
Julia 1.9 is already available in nixpkgs. For some reason Julia 1.8 is the top search result, which is why I might have missed it. As such, this particular issue falls under the general category of executing non-NixOS binaries on NixOS, which has already been covered on the Unix Stack Exchange.
Cannot execute binary: required file not found
1,314,899,170,000
I use Vim a lot, and I know how to start vim in insert mode. So I have an alias named vii in my .bash_aliases file. On the other hand, I use Git a lot too, and I have this line in my .gitconfig: [core] editor = vi To write a commit message the vi editor is opened every time and I have to go into insert mode. So I thought of replacing vi with vii, and did. But the problem is, when I do git commit, instead of opening vim in insert mode, it gives this error: error: cannot run vii: No such file or directory error: There was a problem with the editor 'vii'. Please supply the message using either -m or -F option. This makes it clear that git does not look at the .bash_aliases file; it isn't related to bash in any way. It directly looks for /usr/bin/vii and executes it if it exists. The Question Can I place the aliased version of vi as vii in /usr/bin/? (And please don't suggest using git commit -m "<commit message>". There are other situations where I need vim in insert mode.)
Aliases are internal to each of your current shell environments - they are expanded by the currently running shell (bash in your case), so they only have effect on what you execute by typing/pasting in your terminal. You have at least two options here: create a wrapper script named vii that will execute vim -c 'startinsert' and put it preferably in /usr/local/bin/ (or $HOME/bin, if it exists and is in your search path). The script only needs to contain #!/bin/sh [1] exec vim -c 'startinsert' "$@" [2] (Make sure to make it executable by running chmod +x /usr/local/bin/vii.) Depending on the PATH configuration of your git/other programs, you may need to specify the full path to that wrapper script (i.e., editor = /usr/local/bin/vii). If it is ok for you to have vim always start in insert mode, configure it to do so by adding startinsert at the end of .vimrc. [1] You can write the "she-bang" line as #!/bin/bash, but there's no need to in a script that contains no bashisms. [2] $@ must be in double quotes in case the script is ever called with argument(s) that contain space(s). startinsert does not need to be quoted (but it doesn't hurt).
place the aliased version of an existing command in /usr/bin/
1,314,899,170,000
I watched a video lecture today that introduced C and things like how to make a C program that will run in Linux. I followed the steps given and now I'm stuck with a bit of a problem. I created my C file (HelloWorld.c) and used the command gcc -o HelloWorld HelloWorld.c to compile the file, both of these steps were successful. Afterwards I checked to make sure that HelloWorld had been created by using the command ls, and it had been. However, when I use the command HelloWorld, which is supposed to run the program, I get an error that says HelloWorld: command not found. In the video lecture the professor did mention that this worked for 32-bit systems and I'm using a 64-bit system. Perhaps this could be the problem? EDIT: Also in the video lecture the professor mentioned that when I use the command ls I should see HelloWorld*. I see only HelloWorld (without the star).
You don't have the value of the PATH environment variable set to include whatever directory the HelloWorld executable file lives in. Supposing you have used cd to get to the directory, you can run HelloWorld with this command: ./HelloWorld Unix shells have a variable called PATH, which is a :-delimited list of directories in which to look when the user issues a command without a fully qualified path name (/usr/bin/ls is fully qualified: it starts at / and ends at ls, but ls is not fully qualified by itself). If you don't have an entry of . in PATH, you have to explicitly use ./ at the beginning of a command to get the file of that name in the current directory to execute.
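The effect is easy to reproduce in a scratch directory; a small sketch (all file names below are made up for the demonstration, and a default PATH without "." is assumed):

```shell
# A tiny "program" in a directory that is not part of $PATH.
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$dir/HelloWorld"
chmod +x "$dir/HelloWorld"

cd "$dir"
HelloWorld 2>/dev/null || echo "bare name: not found via PATH search"
./HelloWorld    # an explicit path bypasses the PATH search and runs the file
```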
Running C Programs on Linux
1,314,899,170,000
I use Ubuntu 16.04 and I have a localization file (a file that executes many other files) that I got after downloading my own GitHub project to my machine. This file contains a Bash script, and is named localize.sh. I run it in a subsession via ~/${repo}/localize.sh. The file contains many lines, all with the same basic pattern (see below), to execute all relevant files in a sub-session. This is the actual content of that file: #!/bin/bash ~/${repo}/apt/security.sh ~/${repo}/apt/lemp.sh ~/${repo}/apt/certbot.sh ~/${repo}/apt/misc.sh ~/${repo}/third-party/pma.sh ~/${repo}/third-party/wp-cli.sh ~/${repo}/conf/nginx.sh ~/${repo}/conf/php.sh ~/${repo}/conf/crontab.sh ~/${repo}/local/tdm.sh One can notice the repetitive ~/${repo}/ pattern. It isn't a big problem, but it would still be good to reduce these redundant characters as this file grows larger. What is the most minimal way possible to achieve a DRY (Don't Repeat Yourself) version of that code? One single long line isn't something I personally would want to use, in this case. Edit: By principle, there aren't and there shouldn't be any other files in the listed directories besides the files listed in localize.sh. Also, it might be that the name localize.sh, as well as calling the action of the file "localization", is a bad approach; please criticize me if you think it's bad, as a side note.
Based on the answer by Marc, I assume the shortest solution would be something like this: $ myPath="$HOME/$repo"; find "$myPath" -type f -iname "*.sh" -exec {} \; file_1 file_2 ...
How to execute many files in the same directory in a minimal, DRY, pretty way?
1,314,899,170,000
I am working on a Linux server, and I'm running different jobs on different nodes. However, when compiling my programs, I didn't set specific names, so they are all a.out. Now I found that one of the running a.out processes may be wrong, and I want to terminate it. But the top command doesn't show the path to the executable. How do I find it?
You can use lsof (available for just about any Unix variant, but often not part of the default installation) to list all the files a process is using. “Using” includes open file descriptors as well as closely related concepts such as which executable the process is running. The executable has txt in the FD column, for obscure historical reasons. $ lsof -p1234 | grep txt a.out 1234 user15964 txt REG 253,0 34567 /path/to/a.out (made-up output) On Solaris and Linux, there's a more direct way: the proc filesystem provides information about each process, including which executable it's running. (On Linux at least, that's where lsof gets its information.) $ ls -l /proc/1234/exe lrwxrwxrwx 1 root root 0 Feb 30 34:56 /proc/1234/exe -> /path/to/a.out If you're looking for a process running a given executable, run fuser. $ fuser /path/to/a.out /path/to/a.out: 1234e 1239e
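The /proc approach also works from within a script: a process can ask for its own executable through the same symlink. A Linux-specific sketch, where the shell inspects itself via its own PID ($$):

```shell
# /proc/PID/exe is a symlink to the executable that process PID is running;
# here PID is $$, the PID of the current shell.
exe=$(readlink "/proc/$$/exe")
echo "this shell is running: $exe"
```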
How to know the path of a running executable?
1,494,218,857,000
I have made a simple addition program in C on both OSes, Linux (Ubuntu and CentOS) and Windows 7, with the same source code, which is as follows: #include <stdio.h> int main(){ int a,s,d; printf("type the values u want to add and give tab between them\n"); scanf("%d %d",&a,&s); d=a+s; printf("addition is %d",d); return 0; system("read -p 'Press Enter to EXIT...' var"); } In Windows it runs when I double-click on addition.exe, but in Ubuntu (also in CentOS) when I click on the executable file addition, nothing happens. It does not run or open a terminal. However, it runs when I type ./addition in a terminal. But I want to run it by double-clicking on it. What should I do? The properties of that file are in this image: Also there is no option like "open in terminal" in the "open with" section of the properties. I also tried creating a .desktop file, which is as follows: [Desktop Entry] Name=addition Type=Application Exec=/media/smit/D/smits programs of c/projects by code blocks/02U/addition/bin/Debug/addition Terminal=true When I click on addition.desktop it says an error occurred while launching the application. I also tried to open it by copying this desktop file to /usr/share/applications.
The core of the issue is that you're trying to run the program, which is a console application, but you don't have a terminal attached to it. In a terminal you can run your program by just calling the program name, but in a GUI you need to specify explicitly that a terminal window should be raised to run console apps (this is particularly true of GNOME-based desktops, such as Ubuntu's Unity). What you need to do is create a .desktop file for your program with 4 fields. Here's an example: [Desktop Entry] Name=MyProg Type=Application Exec=/home/xieerqi/example_directory/hello_world_prog Terminal=true I don't know about CentOS, but as far as Ubuntu goes, it requires that .desktop files be made executable if they are located in any directory under the user's home directory. .desktop files that live in other directories, such as /usr/share/applications, do not require that. So, once you have the .desktop file in place and have made it executable, you will be able to run the program. The important bit is the Terminal=true line. That's what tells the GUI to raise a terminal and run your program there. NOTE: if your program executes stuff and exits immediately, you will need some sort of delay or a getchar(); call just to keep the window open, because the terminal window will exit when the program exits. That's why many users are sometimes confused: "Why is my program not running?" It does run, in reality; it just exits too fast. Additional notes: get rid of system("read -p 'Press Enter to EXIT...' var");. The read call is a shell builtin, not a standalone program, which means it can only be used from a shell, such as bash. If you are using C, do it properly using scanf() or getchar() to add a delay to your program. It is also placed after the return 0; line, which means your system() line will never be reached (the program quits at the return statement), so the placement of the pause is invalid as well.
Read this post on AskUbuntu for an example of a proper .desktop file with links to official documentation. If you are feeling lazy and don't want to make .desktop files for each and every executable file, there are plenty of solutions here. I even posted a script there as well.
a program made in c runs on double click in windows but not in linux
1,494,218,857,000
The job of my Unix executable file is to perform a long computation, and I added an interrupt/resume functionality to it as explained below. At regular intervals, the program writes all relevant data found so far into a checkpoint file, which can then be used as a starting point for a "resume" operation. To interrupt the program, I use Ctrl+C. The only problem with this methodology is that, if the interruption occurs while the program is writing into the file, I am left with a useless half-written file. The only fix I could find so far is as follows: make the program write into two files, so that at restart time one of them will be readable. Is there a cleaner, better way to create an "interruptable" Unix executable?
It depends a bit on whether you care only about the program itself crashing, or the whole system crashing. In the first case, you could write the fresh data to a new file, and then rename that to the real name only after you're done writing. That way the file will contain either the previous or the new checkpoint data, but never only partial information. Partial writes should be rare enough in any case, if we assume the checkpointing code itself is not likely to fail, and if relevant signals are trapped to make sure the program saves a new checkpoint in full before exiting. (In addition to SIGINT, I think you'd better catch SIGHUP and SIGTERM too.) If we consider the possibility of the whole system crashing, then I wouldn't trust only one checkpoint file. The data is not likely to actually be on the disk when the system returns from the file write system call. Instead, the OS and the disk itself are likely to cache the data and actually write it some time later. So leaving one or two previous checkpoints would work as a failsafe against that.
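The write-then-rename approach described above can be sketched in shell (the file names here are made up):

```shell
# Write the fresh checkpoint under a temporary name first.
dir=$(mktemp -d)
ckpt="$dir/checkpoint.dat"
printf 'iteration=42\n' > "$ckpt.tmp"

# Optionally push the data towards the disk before renaming
# (GNU sync accepts a file argument; fall back to a global sync).
sync "$ckpt.tmp" 2>/dev/null || sync

# rename(2) is atomic on the same filesystem: a reader sees either
# the old checkpoint or the new one, never a half-written file.
mv -f "$ckpt.tmp" "$ckpt"
```

The same pattern applies in C with write()/fsync()/rename().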
Best way to create "interruptable" executable
1,494,218,857,000
How do I use file to differentiate between ELFves and scripts as quickly as possible? I don't need any further details, just ELF, script (/plaintext), or other/error.
If it's just between ELF and script, you may not need file at all. With bash: IFS= LC_ALL=C read -rn4 -d '' x < file case $x in ($'\x7fELF') echo ELF;; ("#!"*) echo script;; (*) echo other;; esac (-d '' (to use NUL character as delimiter) is to work around the fact that bash's read otherwise just ignores the NUL bytes in the input). See also: Searching for 32-bit ELF file Fastest way to determine if shebang is present
Differentiate between ELFves and scripts quickly
1,494,218,857,000
I have a source tree which, when make is run, produces several executables named "001", "002", and etc. I'm trying to write a script which will find all of these executables in my source tree, and then execute them. I have this so far: find build/ -type f -executable | ack --nocolor "\d{3}$" Which lists the executables that I want to execute correctly. My question is, how do I then run all of them? I thought perhaps some combination of xargs and exec would do it, but exec seems to try replacing the current shell with the command, which isn't what I want.
Try: $ find build/ -type f -executable | ack --nocolor "\d{3}$" | while read prog do "$prog" done
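An alternative that skips the pipeline entirely is to let find run each match itself with -exec (GNU find is assumed for -executable and -regex). A self-contained sketch with made-up build outputs:

```shell
# Fake a build tree containing two three-digit executables.
dir=$(mktemp -d)
mkdir "$dir/build"
for n in 001 002; do
    printf '#!/bin/sh\necho "ran %s"\n' "$n" > "$dir/build/$n"
    chmod +x "$dir/build/$n"
done

# -regex matches against the whole path; -exec runs each file directly,
# which also copes with paths containing whitespace.
find "$dir/build" -type f -executable -regex '.*/[0-9][0-9][0-9]$' -exec {} \;
```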
Execute all files in a list
1,494,218,857,000
I've installed hddtemp on my Arch Linux, but it needs to be run with root permissions. I want to execute it as a normal user without using sudo. How can I do this?
It's possible to assign users in a group permission to run an executable by using the /etc/sudoers mechanism. For instance, to permit all users in the users group to run hddtemp with root permissions run visudo as root and add: %users ALL = (root) NOPASSWD: /path/to/hddtemp
Make a programme executable by common users
1,494,218,857,000
This is more of a general question I have been curious about but put simply: how does bash execute commands given to it via a script or terminal? It would be possible, I guess, to have a bunch of if statements checking all commands like so (Pseudocode): if (command == "pwd") pwd(); else if (command == "echo") echo(); ... But this would create problems as you would have to recompile the code every time you add a new command, like one started for a program like firefox or gedit. Then I remembered the which command, which (no pun intended) points to the directory of a given command, making me assume that bash simply looks for a file and grabs it with an iostream to execute it. Is this the case, and if so, how does it know what method to call, or are they simply generic executables?
Basically, some commands are built in to the bash shell program itself (e.g. echo, set), in which case, bash already has the code compiled into it to run those commands internally, in response to a user calling them from the command line. If you look at the manual in man bash or info bash, it has a list of the 'builtins'. If a command is not found in the builtins, then the shell searches the directories listed in the $PATH environment variable (in the order listed), to see if it can find an external command there. If not, then it will report an error that the command can't be found.
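You can watch this resolution order from the shell itself: POSIX `command -v` reports how a name would be resolved (the external-command path will differ between systems):

```shell
command -v cd    # a builtin resolves to just its name: cd
command -v ls    # an external command resolves to a path, e.g. /bin/ls

# bash additionally offers `type`, which names the category directly:
#   type -t cd  -> builtin
#   type -t ls  -> file
```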
How does bash execute commands
1,494,218,857,000
I have (foolishly?) written a couple of moderately general-purpose xslt scripts. I'd quite like to turn these into executables that read an xml document from standard in or similar. The way you do this with other languages is to use a shbang. Is there an easy / standard way to do this with xsltproc and friends? Sure I could hack up a wrapper around xsltproc that pulls off the first comment line... but if there is something approximating a standard this would be nicer to use.
You could use the generic binfmt-misc kernel module that handles which interpreter is used when an executable file is run. It is typically used to allow you to run foreign architecture files without needing to prefix them with qemu or wine, but can be used to recognise any magic characters sequence in a file header, or even a given filename extension, like *.xslt. See the kernel documentation. As an example, if you have a file demo.xslt that starts with the characters <xsl:stylesheet version=... you can ask the module to recognise the string <xsl:stylesheet at offset 0 in the file and run /usr/bin/xsltproc by doing as root colon=$(printf '\\x%02x' \':) # \x3a echo ":myxsltscript:M::<xsl${colon}stylesheet::/usr/bin/xsltproc:" >/etc/binfmt.d/myxslt.conf cat /etc/binfmt.d/myxslt.conf >/proc/sys/fs/binfmt_misc/register You don't need to go via the /etc file unless you want the setting to be preserved over a reboot. If you don't have the /proc file, you will need to mount it first: mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc Now, if you chmod +x demo.xslt you can run demo.xslt with any args and it will run xsltproc with the filename demo.xslt provided as an extra first argument. To undo the setup, use echo -1 >/proc/sys/fs/binfmt_misc/myxsltscript
xslt shbang: Using xslt from the command line
1,494,218,857,000
Here is what I do tree -R /Applications/NetBeans/NetBeans\ 7.0.app/ | perl -e 'while (<>) { print if /java$/ }' but of course this doesn't return the result I want. What I want is to display an executable java file along with its recursive directory so that I would know where that java file is. Something like this structure below `-- Contents `-- Resources `-- NetBeans `-- ExecutableJavaEnv `-- java This question is inspired from my question on SU. The structure display above actually is not really important because I'm not sure if I can get what I want using find. What I need is just get the path so that I can set my TextMate to run NetBeans' Java instead of /usr/bin/java from my Mac OS X 10.5.8. Advice? Help? 1st Edit: Thanks for answers so far, I appreciate it. Here is the result of the command I tried: . find /Applications/NetBeans/NetBeans\ 7.0.app/ -name java /Applications/NetBeans/NetBeans 7.0.app//Contents/Resources/NetBeans/java . ll /Applications/NetBeans/NetBeans\ 7.0.app/Contents/Resources/NetBeans/ total 632 -rw-rw-r-- 1 arie admin 6.6K Apr 8 09:29 CREDITS.html -rw-rw-r-- 1 arie admin 1.7K Apr 8 09:29 DISTRIBUTION.txt -rw-rw-r-- 1 arie admin 2.1K Apr 8 09:30 LEGALNOTICE.txt -rw-rw-r-- 1 arie admin 78K Apr 8 09:30 LICENSE.txt -rw-rw-r-- 1 arie admin 5.4K Apr 8 09:30 README.html -rw-rw-r-- 1 arie admin 158K Apr 8 09:30 THIRDPARTYLICENSE.txt drwxrwxr-x 8 arie admin 272B Apr 8 09:29 apisupport/ drwxrwxr-x 3 arie admin 102B Apr 8 10:32 bin/ drwxrwxr-x 9 arie admin 306B Jul 1 15:56 cnd/ drwxrwxr-x 9 arie admin 306B Apr 8 09:29 dlight/ drwxrwxr-x 9 arie admin 306B Apr 8 09:30 enterprise/ drwxrwxr-x 6 arie admin 204B Apr 8 09:30 ergonomics/ drwxrwxr-x 6 arie admin 204B Jun 11 22:17 etc/ drwxrwxr-x 7 arie admin 238B Apr 8 09:30 groovy/ drwxrwxr-x 21 arie admin 714B Jun 11 22:15 harness/ drwxrwxr-x 11 arie admin 374B Jun 11 22:25 ide/ drwxrwxr-x 12 arie admin 408B Jul 1 15:56 java/ drwxrwxr-x 10 arie admin 340B Apr 8 10:15 mobility/ -rw-rw-r-- 1 arie admin 33K 
Apr 8 09:30 moduleCluster.properties drwxrwxr-x 15 arie admin 510B Jun 11 22:17 nb/ -rw-rw-r-- 1 arie admin 15K Apr 8 09:30 netbeans.css drwxrwxr-x 11 arie admin 374B Apr 8 09:30 php/ drwxrwxr-x 11 arie admin 374B Jun 11 22:25 platform/ drwxrwxr-x 10 arie admin 340B Apr 8 09:30 profiler/ drwxrwxr-x 3 arie admin 102B Apr 8 08:43 ruby/ drwxrwxr-x 7 arie admin 238B Apr 8 09:30 websvccommon/ . And for another answer is this . find /Applications/NetBeans/NetBeans\ 7.0.app -type f -executable -name java find: -executable: unknown option
To find executable files called java under the specified directory: find '/Applications/NetBeans/NetBeans 7.0.app/' -name java -type f -perm -u+x The output will be one file name per line, e.g. /Applications/NetBeans/NetBeans 7.0.app/Contents/Resources/NetBeans/ExecutableJavaEnv/java If you want to omit the …/NetBeans 7.0.app part, first switch to the directory and run find on the current directory (.). There'll still be a ./ prefix. cd '/Applications/NetBeans/NetBeans 7.0.app/' find . -name java -type f -perm -u+x Strictly speaking, -perm -u+x selects all files that are executable by their owner, not all files that you can execute. GNU find has a -executable option to look for files that you have execute permission on, taking all file modes and ACLs into account, but this option isn't available on other systems such as OSX. In practice, this is unlikely to matter; in fact for your use case you can forget about permissions altogether and just match -name java -type f. -type f selects only regular files, not directories or symbolic links. If you want to include symbolic links to regular files in the search, add the -L option to find (immediately after the find command, before the name of the directory to search).
How to grep recursive UNIX tree results along with each tree node?
1,494,218,857,000
I’ve made a script that would work on rhel distros and forks. It’s for personal use to automatically download repositories and software that I use. When I make the script executable on the host machine I can right click on the script and choose run as a program. When I copy the script to a flash drive and then copy it from a flash drive to another computer running the same operating system I have to make it executable again to give back the function to right click and run as a program. There are obvious workarounds to still use the script but being able to right click and run as a program is the most streamlined and useful for what my script is doing. So how do I make my script keep that functionality when I transfer it to another pc via usb?
When I copy the script to a flash drive and then copy it from a flash drive to another computer running the same operating system I have to make it executable again to give back the function to right click and run as a program. Execute permissions are not preserved when you copy files to and from the flash drive because the file system on the drive does not support unix-style permissions. Most likely, the flash drive is formatted with exFAT or vFAT. Potential solutions: Format the drive with a Linux file system, like Ext2/3/4 or XFS. There are too many to list all of them here. This is the only viable solution if you want to run the script directly from the USB drive. Use a container that supports Linux permissions, like tar, to hold the file while it is on the drive. zip also supports Linux permissions to an extent. 7z does not. Bypass the USB drive by transferring the files over the network, using tools like scp and rsync.
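The tar suggestion is easy to verify: the mode bits survive the archive round trip even when the archive itself later sits on a FAT-formatted drive. A small sketch with made-up file names:

```shell
# Create an executable script, archive it, extract it elsewhere,
# and check that the execute bit survived.
src=$(mktemp -d); dst=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$src/myscript.sh"
chmod +x "$src/myscript.sh"

tar -C "$src" -cf "$src/bundle.tar" myscript.sh   # tar records the mode bits
tar -C "$dst" -xf "$src/bundle.tar"

test -x "$dst/myscript.sh" && echo "execute bit preserved"
```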
How to make my script stay executable on different devices?
1,494,218,857,000
It would be handy to be able to start Java programs by just calling the class file from the terminal (and have it running it the terminal when double-clicked in GUI, but this is less important). I so far only help myself with the following ad hoc fix: alias cs='java charstat.Charstat' Linux recognises the files as Java: charstat/Charstat.class: compiled Java class data, version 52.0 (Java 1.8) So is there a way to have the call hard-wired? Ubuntu 16.04 here, but general answers welcome. End of the question, troubleshooting section starts. UPDATE So far the most promising line of action is the one proposed by Gilles. So I ran: echo ":java-class:M:0:cafebabe::/usr/bin/java:" | sudo tee /proc/sys/fs/binfmt_misc/register Now, tomasz@tomasz-Latitude-E4200:~/Desktop$ cat /proc/sys/fs/binfmt_misc/java-class enabled interpreter /usr/bin/java flags: offset 0 magic 6361666562616265 But, tomasz@tomasz-Latitude-E4200:~/Desktop$ ./Void.class bash: ./Void.class: cannot execute binary file: Exec format error This was done on Ubuntu 14.04. It has: binfmt-support/trusty,now 2.1.4-1 amd64 [installed,automatic] Support for extra binary formats which I think was not installed automatically on 16.04 if that matters. Yesterday already, following Mark Plotnick's comment, I wrestled with this guide, to no avail. It introduced a wrapper at /usr/local/bin/javawrapper, which Gilles' solution doesn't contain. That's for Arch Linux though. UPDATE 2 (Ubuntu 16.06) On 16.06: tomasz@tomasz-Latitude-E4200:~/Desktop/io$ cat /proc/sys/fs/binfmt_misc/Java enabled interpreter /usr/local/bin/javawrapper flags: offset 0 magic cafebabe And, tomasz@tomasz-Latitude-E4200:~/Desktop/io$ ./Nain.class bash: ./Nain.class: No such file or directory UPDATE 3 After echo ":java-class:M:0:\xca\xfe\xba\xbe::/usr/bin/java:" | sudo tee /proc/sys/fs/binfmt_misc/register : tomasz@tomasz-Latitude-E4200:~/Desktop/io$ java Main Please input the file location and name. 
^Ctomasz@tomasz-Latitude-E4200:~/Desktop/io$ ./Main.class Error: Could not find or load main class ..Main.class For the record: tomasz@tomasz-Latitude-E4200:/proc/sys/fs/binfmt_misc$ cat java-class enabled interpreter /usr/bin/java flags: offset 0 magic cafebabe
Linux Ubuntu already does this for a jar. With the openjdk-8-jre package (and earlier versions), executing a jar invokes jexec on it. This isn't done for a class, maybe because it's rare for a class to be a standalone executable rather than a library (then again, that's also the case for jars). You can configure the same underlying mechanism to handle classes. That mechanism is the Linux binfmt_misc feature, which allows the kernel to execute arbitrary files via a helper program. Because Java's command line is really weird, you need to go through a wrapper to convert the file name into something that the java command is capable of executing. Save this script as /usr/local/bin/javarun and make it executable: #!/bin/sh case "$1" in */*) dir="${1%/*}"; base="${1##*/}";; *) dir="."; base="$1";; esac shift case "$base" in [!-]*.class) base="${base%.*}";; *) echo >&2 "Usage: $0 FILENAME.class [ARGS...]"; exit 127;; esac case "$CLASSPATH" in "") exec java -cp "$dir" "$base" "$@";; *) exec java -cp "$dir:$CLASSPATH" "$base" "$@";; esac The following command should do the trick for classes. See https://unix.stackexchange.com/a/21651 and the kernel documentation for explanations. echo ":java-class:M:0:\xca\xfe\xba\xbe::/usr/local/bin/javarun:" | sudo tee /proc/sys/fs/binfmt_misc/register Run this once to enable it. To remove a setting, run sudo rm /proc/sys/fs/binfmt_misc/java-class. Once you're happy with the setting, add the following command to /etc/rc.local: echo >/proc/sys/fs/binfmt_misc/register ":java-class:M:0:\xca\xfe\xba\xbe::/usr/local/bin/javarun:" Zsh If you use zsh as your shell, you can make it execute files based on their extension, by defining a suffix alias. alias -s class=javarun You need the same javarun script as above. Of course this only works in zsh, not in bash, file managers, scripts (except zsh scripts where this command has run), ...
Getting Java programs start without calling with Java
1,494,218,857,000
I'm wondering if $PATH cascades entries. You'll all need to take a leap of faith with me here, but here it goes. Let's say we have a Java executable at /usr/bin/java but this version is very old and outdated. Unfortunately, we don't have su access so we can't just replace it. We can, however, download the current version of the JRE/JDK locally and point to the updated version. My question is, how does bash handle the case where we have two or more executables with the same name but in two or more different locations? Does bash somehow choose which one to execute when we type java into the console? Assuming /usr/bin has many other executables that we need, how would the $PATH look for something like this to work correctly? Ideally, when we type java -version we should see: java version "1.8.0_45" Java(TM) SE Runtime Environment (build 1.8.0_45-b14) Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode) instead of java version "1.7.0_45" Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) Client VM(build 24.45-b08, mixed mode, sharing) I'm sure this question has been asked before and has some type of jargon associated with it. I've poked around SE, SO, and some forums but didn't find anything conclusive.
Your $PATH is searched sequentially. For example if echo $PATH shows /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin, each of those directories is searched in sequence for a given command (assuming the command isn't an alias or a shell builtin). If you want to override specific binaries on a per-user basis (or you just don't have access to override for other users than yourself), I would recommend creating a bin directory in your home directory, and then prefixing your PATH variable with that directory. Like so: $ cd ~ $ pwd /home/joe $ mkdir bin $ echo "$PATH" /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin $ echo 'export PATH="$HOME/bin:$PATH"' >> .bash_profile Then source .bash_profile so the new PATH definition will take effect (or just log out and log in, or restart your terminal emulator). $ source .bash_profile $ echo "$PATH" /home/joe/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin Now, any executable files you put in /home/joe/bin/ will take precedence over system binaries and executables. Note that if you do have system access and the overrides should apply to all users, the preferred place to put override executables is /usr/local/bin, which is intended for this purpose. In fact often /usr/local/bin is already the first directory in $PATH specifically to allow this.
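The first-match behaviour is easy to demonstrate with two stub java scripts (the directory names, stub contents, and version strings below are made up for illustration):

```shell
# Two directories, each with its own "java"; whichever directory comes
# first in PATH wins the lookup.
old=$(mktemp -d); new=$(mktemp -d)
printf '#!/bin/sh\necho "1.7"\n' > "$old/java"
printf '#!/bin/sh\necho "1.8"\n' > "$new/java"
chmod +x "$old/java" "$new/java"

export PATH="$old:$PATH"
java                         # finds $old/java first, prints 1.7
export PATH="$new:$PATH"
hash -r 2>/dev/null || true  # flush the shell's command-location cache
java                         # $new now precedes $old, prints 1.8
```

Prepending your own directory, as the answer describes, is exactly the second step above.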
Handling duplicate programs names bash
1,494,218,857,000
Trying to run an executable file on the terminal (I am using Tails live OS), but I keep receiving an error message. I have set permissions already. The command I wrote: sudo ./home/amnesia/myfile I receive "Command not found"? I tried running it with or without sudo: $ sudo /home/amnesia/myfile sudo: unable to execute /home/amnesia/myfile: No such file or directory $ /home/amnesia/myfile bash: /home/amnesia/myfile: No such file or directory Information about the file (it's a binary, not a script): $ ls -l /home/amnesia/myfile -rwxrwxrwx 1 amnesia amnesia 15327 Sep 3 2013 /home/amnesia/myfile $ file /home/amnesia/myfile ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped Information about my system: $ uname -a Linux amnesia 3.16-3-amd64 #1 SMP Debian 3.16.5-1 (2014-10-10) x86_64 GNU/Linux $ file /bin/ls /bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0xd3280633faaabf56a14a26693d2f810a32222e51, stripped
$ /home/amnesia/myfile bash: /home/amnesia/myfile: No such file or directory $ file /home/amnesia/myfile ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped So myfile exists, but running it gives the message “No such file or directory”. This happens in the following circumstance: The file depends on a loader — it's a dynamically linked executable, and these need a loader program to load the dynamically linked libraries. (The loader can also be the interpreter designated by a shebang line, but bash detects this case and gives a different error message.) The loader file is not present. The message “No such file or directory” is really about the loader, but the shell doesn't know that the loader is involved, so it reports the name of the original file. I explain this in more detail in “No such file or directory” lies on Optware installed binaries. Why can't you run this program? Because you don't have the dynamic loader for 64-bit executables. $ uname -a Linux amnesia 3.16-3-amd64 #1 SMP Debian 3.16.5-1 (2014-10-10) x86_64 GNU/Linux $ file /bin/ls /bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0xd3280633faaabf56a14a26693d2f810a32222e51, stripped Your system has a 64-bit kernel, but the rest of the system is 32-bit. Linux supports this configuration (a 64-bit kernel can run both 64-bit programs and 32-bit programs, but a 32-bit kernel can only run 32-bit programs). The kernel can load the program just fine; you would be able to run a statically-linked amd64 executable. However, you don't have the 64-bit loader (/lib64/ld-linux-x86-64.so.2), nor presumably any 64-bit library. So you can't run dynamically-linked amd64 executables. Why would you run a 64-bit kernel with a 32-bit userland? To use more than about 3GB of physical memory.
(This isn't the only way — another possibility is to run a 32-bit kernel that supports PAE.) To be able to run 64-bit binaries, e.g. by booting on the live OS and then chrooting into an installed 64-bit system somewhere. To reduce maintenance effort for the distribution: provide a single kernel for recent hardware, and make it 64-bit. To run 64-bit virtual machines (some VM engines require a 64-bit kernel to run a 64-bit VM). I don't think Tails provides a 64-bit system. You should get a 32-bit version of the executable. If you can't, use some other distribution (possibly in a virtual machine).
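The misleading message is easy to reproduce with a shebang script whose interpreter is missing — the same mechanism, with the shebang interpreter standing in for the ELF dynamic loader (the file name and interpreter path below are made up):

```shell
work=$(mktemp -d); cd "$work"
# An executable whose interpreter does not exist; for an ELF binary the
# "interpreter" would be the dynamic loader recorded in PT_INTERP.
printf '#!/no/such/interpreter\n' > myfile
chmod +x myfile
ls -l myfile   # the file is plainly there...
if ! ./myfile; then
    # ...yet exec fails: the "No such file" is about the interpreter
    echo "exec failed even though the file exists"
fi
```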
/home/amnesia/myfile: command not found — 64-bit executable, 64-bit kernel
1,494,218,857,000
I set some additional locations to the PATH environment variable in my ~/.bashrc so that these are included/sourced in logins and non-interactive scripts that are scheduled with cron. I've noticed though that on one system the PATH is modified correctly, but none of the scripts within will run despite ownership and permissions being set correctly (as far as I can tell). $ ls -l total 756 -rw-r-xr-x 1 slackline slackline 300 Sep 6 07:35 backup -rwxr-xr-x 1 slackline slackline 978 Dec 30 10:28 bbc_mpd -rwxr-xr-x 1 slackline slackline 355483 Nov 29 07:31 get_iplayer -rwxr-xr-x 1 slackline slackline 110 Sep 6 07:35 rsync.albums -rwxr-xr-x 1 slackline slackline 114 Sep 6 07:35 rsync.climbing -rwxr-xr-x 1 slackline slackline 108 Sep 6 07:35 rsync.films -rwxr-xr-x 1 slackline slackline 125 Sep 6 07:35 rsync.mixes -rwxr-xr-x 1 slackline slackline 117 Sep 6 07:35 rsync.pics -rwxr-xr-x 1 slackline slackline 117 Sep 6 07:35 rsync.torrents -rwxr-xr-x 1 slackline slackline 95 Sep 6 07:35 rsync.work The contents of one of my scripts, which synchronizes a directory to my NAS to back it up: $ cat ~/bin/rsync.work #!/bin/bash source ~/.keychain/$HOSTNAME-sh /usr/bin/rsync -avz /mnt/work/* readynas:~/work/. which fails to run when called: $ rsync.work bash: /home/slackline/bin/rsync.work: Permission denied but works when preceded with bash -x : $ bash -x /home/slackline/bin/rsync.work + source /home/slackline/.keychain/kimura-sh ++ SSH_AUTH_SOCK=/tmp/ssh-P3GL1A3Juwhe/agent.4209 ++ export SSH_AUTH_SOCK ++ SSH_AGENT_PID=4210 ++ export SSH_AGENT_PID + /usr/bin/rsync -avz /mnt/work/android /mnt/work/arch /mnt/work/classes /mnt/work/doc /mnt/work/linux /mnt/work/lost+found /mnt/work/nc151.tar /mnt/work/nc152now-11.rar /mnt/work/personal /mnt/work/ref /mnt/work/scharr 'readynas:~/work/.'
sending incremental file list sent 1,176,907 bytes received 19,786 bytes 30,296.03 bytes/sec total size is 27,852,538,230 speedup is 23,274.59 $ set -x ; ~/bin/rsync.work ; set +x + /home/slackline/bin/rsync.work bash: /home/slackline/bin/rsync.work: Permission denied + set +x $ set -x ; bash -x ~/bin/rsync.work ; set +x + bash -x /home/slackline/bin/rsync.work + source /home/slackline/.keychain/kimura-sh ++ SSH_AUTH_SOCK=/tmp/ssh-P3GL1A3Juwhe/agent.4209 ++ export SSH_AUTH_SOCK ++ SSH_AGENT_PID=4210 ++ export SSH_AGENT_PID + /usr/bin/rsync -avz /mnt/work/android /mnt/work/arch /mnt/work/classes /mnt/work/doc /mnt/work/linux /mnt/work/lost+found /mnt/work/nc151.tar /mnt/work/nc152now-11.rar /mnt/work/personal /mnt/work/ref /mnt/work/scharr 'readynas:~/work/.' sending incremental file list sent 1,174,755 bytes received 19,786 bytes 39,165.28 bytes/sec total size is 27,852,538,230 speedup is 23,316.52 + set +x My ~/.bashrc has the following line in it. $ grep PATH ~/.bashrc # Additions to system PATH PATH="/home/slackline/bin:$PATH:/usr/local/stata/:/usr/local/stattransfer/" export PATH And I can run the rsync command at the command line myself (so it's not a case of permission being denied on the SSH connection). $ /usr/bin/rsync -avz /mnt/work/* readynas:~/work/. sending incremental file list sent 1,176,723 bytes received 19,786 bytes 32,781.07 bytes/sec total size is 27,852,538,230 speedup is 23,278.17 (Backup is obviously up to date). The version of Bash installed is: $ eix -Ic bash [I] app-admin/eselect-bashcomp (1.3.6@08/29/13): Manage contributed bash-completion scripts [I] app-shells/bash (4.2_p45@08/16/13): The standard GNU Bourne again shell [I] app-shells/bash-completion (2.1@08/28/13): Programmable Completion for bash [I] app-shells/gentoo-bashcomp (20121024@08/28/13): Gentoo-specific bash command-line completions (emerge, ebuild, equery, repoman, layman, etc) Found 4 matches. 
The permissions on the directory (and its structure) are: $ ls -l ~/ | grep bin drwxr-xr-x 2 slackline slackline 4096 Dec 30 10:29 bin $ stat -c"%n (%U) %a" / /home /home/slackline /home/slackline/bin / (root) 755 /home (root) 755 /home/slackline (slackline) 755 /home/slackline/bin (slackline) 755 And an strace shows $ strace rsync.work strace: Can't stat 'rsync.work': No such file or directory $ echo $PATH /home/slackline/bin:~/bin:/usr/local/bin:/usr/bin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.8.2:/usr/games/bin:/usr/local/stata/:/usr/local/stattransfer/:/usr/local/stata/:/usr/local/stattransfer/ $ ls -l ~/bin/ | grep work -rwxr-xr-x 1 slackline slackline 95 Sep 6 07:35 rsync.work $ rsync.work bash: /home/slackline/bin/rsync.work: Permission denied I can't work out what's going wrong here and would be grateful for any thoughts/ideas on how to troubleshoot this. EDIT : Tidied up the various edits made in response to questions to hopefully read a bit more coherently and make it easier to follow what I'd tried and how it fits in with Mark Plotnick's solution.
You mentioned in the comments that your home directory's filesystem is mounted with the users mount option. $ grep home /etc/fstab LABEL=home /home ext4 noatime,users 0 4 – The users mount option implies noexec. From mount(8): users Allow every user to mount and unmount the filesystem. This option implies the options noexec, nosuid, and nodev (unless overridden by subsequent options, as in the option line users,exec,dev,suid).
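If these scripts really do need to be executable from that filesystem, one option is to override the implied flags in fstab, as the mount(8) excerpt above describes. An untested sketch, adapting the fstab line from the question:

```
LABEL=home /home ext4 noatime,users,exec 0 4
```

After editing fstab, sudo mount -o remount /home applies the new options without a reboot.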
permission denied on scripts in ~/bin
1,494,218,857,000
I am trying to write a script that will attempt to find if a certain program is installed. Let's say that the program is called myprog. The problem is that the program can be named in different formats such as 'prefix-myprog', 'myprog', and 'prefix_myprog'. If I use: which myprog then the command line will return the correct location only if it is named EXACTLY myprog. Is there a way that I can locate all possible instances with a wildcard of sorts? Thanks
find /bin /sbin /usr -type f | grep -i myprog Find all files in directories /bin, /sbin and /usr, then filter on 'myprog'. man find man grep apropos myprog can be useful too. man apropos or what about locate -r myprog? man locate
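A PATH-restricted alternative to scanning whole trees is to glob each directory in $PATH (a sketch; findprog is a made-up helper name):

```shell
# List executables anywhere in $PATH whose name contains the pattern.
findprog() {
    pat=$1
    IFS=:                               # split $PATH on colons
    for dir in $PATH; do
        for f in "$dir"/*"$pat"*; do
            [ -f "$f" ] && [ -x "$f" ] && printf '%s\n' "$f"
        done
    done
    unset IFS                           # restore default field splitting
}

findprog myprog   # would match myprog, prefix-myprog, prefix_myprog, ...
```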
Finding program name by wildcard pattern
1,494,218,857,000
I want to know if I can use the binary of a program without modification on the three systems. After all, they are all Unices. I'm talking about the same architecture.
No, you cannot, as the ABIs differ. Some BSDs do have binary compatibility with Linux binaries, with some caveats (enabling virtual 8086 mode is a common issue). Often you may need to patch the source, however, as many binaries will make assumptions about their environment based on the fact that the source is developed for Linux. As far as I am aware there is no BSD binary compatibility in the Linux kernel at this time. Andrey Sokolov is working on providing Linux binary support on Illumos without zones, but as far as I am aware no BSD binary compatibility is planned on Illumos at this time.
Can I use the same binary on Linux, *BSD and Illumos?
1,494,218,857,000
After reading about removing the execute permission using chmod, I got curious. Is it possible to recover from removing the execute permission from ld-linux.so without rebooting, if I haven't yet exited bash? Every command appears to stop functioning.
You would need a statically linked (or already running) utility that can do a chmod operation. If you had a statically linked BusyBox or a similar emergency shell installed, that would probably do it. In some old distributions, the basic package management utility (e.g. dpkg or rpm) used to be statically linked to enable libc and loader upgrades. Nowadays there are apparently other ways to do that. But if your package management utility happened to be statically linked and the package containing ld-linux would be still in the cache directory of the package management tools, you might be able to force-reinstall the ld-linux package and fix it that way.
Recovering from removing execute permission from ld-linux.so
1,494,218,857,000
I want to put a bunch of executable scripts in the .command dir (which is also executable), and then only have to source that directory in my .bash_profile. Is this possible? I can get this to work with one file. But when adding a second file, the second file's commands aren't available in the shell. my .bash_profile source ~/.commands/* my .commands folder -rwxr-xr-x 1 christopherreece staff 108 Dec 14 08:55 server_utils.sh -rwxr-xr-x 1 christopherreece staff 23 Dec 14 09:04 short contents of short echo 'a short program' contents of server_utils.sh function upfile { scp $1 root@myserveripadress:~/ } Shell input and output. $ hello hello $ short -bash: short: command not found
You can't do that with one source. The first argument is taken as the file name, the others show up as the positional parameters $1, $2... in the sourced script. $ cat test.src echo hello $1 $ source test.src there hello there But you could do it with a loop: for f in ~/commands/*.src; do source "$f" done (As an aside, having stuff like that include only files with a certain extension is quite useful if you use an editor that leaves backup files with a trailing ~. The backup copies don't become accidentally active, then.) Though note that if you have a sourced script that contains plain commands (like that echo above or your short), they'll be executed when the script is sourced. They don't generate any functions in the sourcing shell. $ cat test2.src echo "shows when sourced" func() { echo "shows when function used" } $ source test2.src shows when sourced $ func shows when function used If you want to have executable scripts instead, the kind where the script runs when you give its name as a command, put them somewhere in PATH (I'd suggest using ~/bin for that), give them the execute permission and put proper hashbangs in the beginning of the scripts (#!/bin/sh or #!/bin/bash or whatever)
Can you put multiple executable scripts in one directory, and by sourcing that directory make all of those commands available?
1,494,218,857,000
I'm not sure if this is the best place to ask this - please point me in the right direction if there's a better place. Let's say, hypothetically, that I have two machines - A is a development machine, and B is a production machine. A has software like a compiler that can be used to build software from source, while B does not. On A, I can easily build software from source by following the usual routine: ./configure make Then, I can install the built software on A by running sudo make install. However, what I'd really like to do is install the software that I just built on B. What is the best way to do that? There are a few options that I have considered: Use a package manager to install software on B: this isn't an option for me because the software available in the package manager is very out of date. Install the compiler and other build tools on B: I'd rather not install build tools on the production machine due to various constraints. Manually copy the binaries from A to B: this is error-prone, and I'd like to make sure that the binaries are installed in a consistent manner across production machines. Install only make on B, transfer the source directory, and run sudo make install on B: this is the best solution I've found so far, but for some reason (perhaps clock offsets), make will attempt to re-build the software that should have already been built, which fails since the build tools aren't installed on B. Since my machines also happen to have terrible I/O speeds, transferring the source directory takes a very long time. What would be really nice is if there were a way to make some kind of package containing the built binaries that can be transferred and executed to install the binaries and configuration files. Does any such tool exist?
Using what you have so far and if the makefile is generated with GNU autotools, I would set the target location or install path with ./configure --prefix=/somewhere/else/than/the/usual/usr/local and then run make && make install and finally copy the files from the prefix folder to the usr/ folder in the other machine. This is assuming both machines have the same architecture, if not, then use the according cross toolchain.
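A common variant of this is staging the install with make's DESTDIR and shipping a tarball. The sketch below simulates the make install step with plain commands so it can run anywhere (myprog and the paths are made up):

```shell
set -e
stage=$(mktemp -d)    # staging area on the build machine (A)

# With an autotools project you would populate the stage with:
#   ./configure --prefix=/usr/local && make && make install DESTDIR="$stage"
# Simulated here so the sketch is self-contained:
mkdir -p "$stage/usr/local/bin"
printf '#!/bin/sh\necho installed\n' > "$stage/usr/local/bin/myprog"
chmod +x "$stage/usr/local/bin/myprog"

# Pack the staged tree into a relocatable tarball...
tarball=$(mktemp)
tar -czf "$tarball" -C "$stage" .

# ...and on machine B unpack it relative to / (a scratch dir here):
rootB=$(mktemp -d)
tar -xzf "$tarball" -C "$rootB"
"$rootB/usr/local/bin/myprog"
```

DESTDIR has the advantage over --prefix that the binaries are built with their final paths baked in, so they still find their data files after the move.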
Can binaries built from source be installed on a second machine?
1,494,218,857,000
Basically, I want to receive input from 2 different files when I'm calling an executable on the terminal, like: ./a.out < file1.pgm file2.pgm I want to read both files' input in my code, one after another.
For the question, where file1.pgm and file2.pgm are files whose contents you want sent to a.out as input: cat file1.pgm file2.pgm | ./a.out If file1.pgm and file2.pgm are executables that produce output for a.out: (file1.pgm; file2.pgm) | ./a.out
How to receive input from 2 files on an executable
1,494,218,857,000
Possible Duplicate: Getting “Not found” message when running a 32-bit binary on a 64-bit system ts3user@...:~/ts3$ dir CHANGELOG LICENSE doc ... ts3server.pid ts3server_linux_x86 ts3server_minimal_runscript.sh ts3server_startscript.sh tsdns ts3user@...:~/ts3$ ./ts3server_linux_x86 sh: ./ts3server_linux_x86: No such file or directory As you can see, the dir command reports the existence of the teamspeak executable. However, when I try to launch it, it states that the file does not exist. What is that? I did chmod 0777 to that directory and chmod 0755 to ts3server_linux_x86.
Teamspeak has two server packages: "Server amd64" and "Server x86". You are trying to execute the 32-bit version, and I guess your Linux is 64-bit. Two solutions: download the 64-bit package, or install the ia32 libs to be able to run 32-bit binaries: sudo apt-get install ia32-libs
Linux isn't sure whether a file exists or not [duplicate]
1,494,218,857,000
my platform: SOC = STM32H743 (ARMv7E-M | Cortex-M7) Board = Waveshare CoreH7XXI Linux Kernel = 5.8.10 (stable 2020-09-17) initial defconfig file = stm32_defconfig rootfs = built using busybox | busybox compiled using arm-linux-gnueabihf-gcc I've created rootfs by following this guide. my kernel cannot execute any file even the init file >>> /linuxrc or /sbin/init. To make sure that the problem is not in the busybox files, I wrote a C helloworld program with the -mcpu=cortex-m7 flag and compiled it with arm-linux-gnueabi-gcc, but again the kernel panicked and threw the -8 error (Exec format error). my busybox files are all linked to the busybox binary and the binary is correctly compiled for 32bit arm: $ readelf -A bin/busybox Attribute Section: aeabi File Attributes Tag_CPU_name: "Cortex-M7" Tag_CPU_arch: v7E-M Tag_CPU_arch_profile: Microcontroller Tag_ARM_ISA_use: Yes Tag_THUMB_ISA_use: Thumb-2 Tag_ABI_PCS_wchar_t: 4 Tag_ABI_FP_rounding: Needed Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_align_needed: 8-byte Tag_ABI_align_preserved: 8-byte, except leaf SP Tag_ABI_enum_size: int Tag_CPU_unaligned_access: v6 the kernel error: [ 0.925859] Run /linuxrc as init process [ 0.943257] Kernel panic - not syncing: Requested init /linuxrc failed (error -8). [ 0.950654] ---[ end Kernel panic - not syncing: Requested init /linuxrc failed (error -8).
]--- my helloworld program: $ readelf -A hello Attribute Section: aeabi File Attributes Tag_CPU_name: "7E-M" Tag_CPU_arch: v7E-M Tag_CPU_arch_profile: Microcontroller Tag_ARM_ISA_use: Yes Tag_THUMB_ISA_use: Thumb-2 Tag_ABI_PCS_wchar_t: 4 Tag_ABI_FP_rounding: Needed Tag_ABI_FP_denormal: Needed Tag_ABI_FP_exceptions: Needed Tag_ABI_FP_number_model: IEEE 754 Tag_ABI_align_needed: 8-byte Tag_ABI_align_preserved: 8-byte, except leaf SP Tag_ABI_enum_size: int Tag_CPU_unaligned_access: v6 the kernel error: [ 1.189550] Run /hello as init process [ 1.198670] Kernel panic - not syncing: Requested init /hello failed (error -8). [ 1.205977] ---[ end Kernel panic - not syncing: Requested init /hello failed (error -8). ]--- Why can't the kernel execute binaries?
The problem is that you are compiling it as a normal static ELF executable. You should compile it as an FDPIC ELF executable, because without an MMU you need a position-independent executable (FDPIC). An FDPIC ELF is not of type ET_EXEC; it is ET_DYN (meaning it's shared) and it is loaded by the Linux dynamic loader. Just add the -mfdpic flag and turn off "build static binary" in busybox's kconfig menu. Note that the -mfdpic flag is on by default in arm-uclinux-fdpicabi toolchains.
kernel cannot execute binaries (error -8)
1,494,218,857,000
I was reading about setuid on Wikipedia. One of the examples goes as follows: 4700 SUID on an executable file owned by "root" A user named "tails" attempts to execute the file. The file owner is "root," and the permissions of the owner are executable—so the file is executed as root. Without SUID the user "tails" would not have been able to execute the file, as no permissions are allowed for group or others on the file. A default use of this can be seen with the /usr/bin/passwd binary file. I do not understand this. How can user "tails" execute this file at all, since he is not the owner of the file, and group and other permissions are not available? I tried to recreate this scenario, and indeed: $ su -c 'install -m 4700 /dev/null suidtest' $ ls -l suidtest -rws------ 1 root root 0 21 dec 07:48 suidtest* $ ./suidtest bash: ./suidtest: Permission denied I only got this working with permissions of 4755. Also, the default use mentioned in the example on Wikipedia (the /usr/bin/passwd) has in fact 4755 permissions. Is the example correct and am I missing something, or is this a mistake?
You are right and the Wikipedia article is wrong. See the below for an example: $ ls -l /usr/bin/passwd -rwsr-xr-x. 1 root root 30768 Feb 22 2012 /usr/bin/passwd $ sudo cp /usr/bin/passwd /tmp/ $ cd /tmp $ ls -l passwd -rwxr-xr-x 1 root root 30768 Dec 21 07:43 passwd $ sudo chmod 4700 passwd $ ls -l passwd -rws------ 1 root root 30768 Dec 21 07:43 passwd $ ./passwd bash: ./passwd: Permission denied $ sudo chmod 4701 passwd $ ./passwd Changing password for user vagrant. Changing password for vagrant. (current) UNIX password: $
setuid example from Wikipedia: 4700
1,494,218,857,000
I created a desktop entry and moved it to ~/.local/share/applications, but it doesn't execute when I click on it from the start menu. This is my .desktop file: [Desktop Entry] Type=Application Categories=Game Name=Minecraft Icon=/home/user/Games/Minecraft/Minecraft-icon.png Exec=/home/user/Games/Minecraft/Minecraft.jar How do I have to modify it to run the .jar file from the start menu?
You have to specify that you want to run the .jar file. Normally jars are not treated as executable, so for the Exec part you have to add java -jar, like this: Exec=java -jar /home/user/Games/Minecraft/Minecraft.jar
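Putting it together, the full desktop entry would look like this (paths taken from the question):

```
[Desktop Entry]
Type=Application
Categories=Game
Name=Minecraft
Icon=/home/user/Games/Minecraft/Minecraft-icon.png
Exec=java -jar /home/user/Games/Minecraft/Minecraft.jar
```

If java isn't on the PATH that the desktop environment uses, give the full path to the java binary in the Exec line.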
How to create a desktop entry for .jar files?
1,494,218,857,000
As of OpenBSD 6.0 mandatory W^X enforcement is implemented. Binaries that need permission to violate this rule can be marked with the ld command: Identify W^X labelled binaries at execve(2) time based upon the PT_OPENBSD_WXNEEDED flag set by ld -zwxneeded. I tried: ld -b <binary> -zwxneeded ld <binary> -zwxneeded # ld -b sbcl -zwxneeded ld: no input files # ld sbcl -zwxneeded sbcl: could not read symbols: File format not recognized I've been reading the ld man page but can't figure out the right syntax for file I/O to set the required flag. Any help/advice is highly appreciated.
Found the answer after rereading the OpenBSD upgrade guide: filesystem mount options have to be adjusted in fstab. The wxallowed mount option. W^X is now strictly enforced by default; a program can only violate it if it is located on a filesystem mounted with the wxallowed mount(8) option. This allows the base system to be more secure as long as /usr/local is a separate filesystem. The base system has no W^X-violating programs, but the ports tree contains quite a few: chromium, mono, node, gnome, libreoffice, jdk, zeal, etc. If you want to run any of these ports on a regular basis, you need to add wxallowed to the mount options for /usr/local in fstab(5), e.g.: 01020304050607.h /usr/local ffs rw,nodev,wxallowed 1 2 Small disks may not have a separate partition for /usr/local. In that case, add wxallowed to the smallest partition containing it: /usr or /. Starting a W^X-violating program from a partition without the wxallowed mount option will produce a core dump and the dmesg(8) will contain an entry such as soffice.bin(15529): mprotect W^X violation. You can temporarily allow W^X-violating ports by issuing mount -uo wxallowed /usr/local.
Mark binaries writable and executable in openBSD
1,494,218,857,000
I'm trying to execute some binary with bash. I am getting a "Permission denied" message despite having given the full privileges (chmod 777) and being the 'root' user: This is the file description: -rwxrwxrwx 1 root root 641K Aug 22 15:04 wrapid This is the error message: bash: ./wrapid: Permission denied Output of strace ./wrapid: execve("./wrapid", ["./wrapid"], [/* 13 vars */]) = -1 EACCES (Permission denied) write(2, "strace: exec: Permission denied\n", 32strace: exec: Permission denied ) = 32 exit_group(1) = ? +++ exited with 1 +++ Output of ldd ./wrapid: /usr/bin/ldd: line 104: lddlibc4: command not found not a dynamic executable Output of file wrapid: wrapid: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x817251da41b3c8684a68f6f4afa1b4cd8f116072, not stripped Output of uname -a: Linux WR-IntelligentDevice 3.4.43-grsec-WR5.0.1.7_standard #2 SMP PREEMPT Thu Aug 22 16:27:28 CST 2013 i686 GNU/Linux
According to the info provided, you are trying to run a 64-bit executable on a 32-bit kernel. It won't work that way. You either need a 32-bit binary, or a 64-bit kernel plus 64-bit glibc libraries.
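A quick way to confirm a mismatch like this is to compare the executable's architecture with the kernel's (shown here against /bin/sh so the commands run anywhere; substitute the failing binary):

```shell
binary=/bin/sh          # substitute the binary that refuses to run
file -L "$binary"       # e.g. "ELF 64-bit LSB executable, x86-64, ..."
uname -m                # e.g. "i686" — a 64-bit binary cannot run on a
                        # 32-bit kernel; the two must be compatible
```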
permission denied when executing binary despite "rwx" privilege and root user
1,494,218,857,000
I have this simple Bash script: #!/bin/bash java -jar ClosureCompiler/compiler.jar --js ../src/typescript.js --js ../src/ts-compiler.js --js_output_file TSCompiler.js I'm getting this error when I try to run the script as ./build.sh in the MSYS environment under Windows (64-bit!): ./build.sh: ./build.sh: cannot execute binary file But the command itself works if I type it directly into the command line window!
The file was encoded in UCS-2 Little Endian ! Changing the encoding to UTF-8 without BOM resolved the issue.
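The symptom and the fix can both be reproduced with iconv (the file names here are made up; UCS-2 LE is close enough to UTF-16LE for iconv's purposes):

```shell
set -e
work=$(mktemp -d); cd "$work"
# Recreate the symptom: a script saved in a 16-bit encoding
printf '#!/bin/bash\necho hello\n' | iconv -f UTF-8 -t UTF-16LE > build.sh
chmod +x build.sh
file build.sh                     # not recognised as a shell script

# The fix: convert back to plain UTF-8 (no BOM)
iconv -f UTF-16LE -t UTF-8 build.sh > build-fixed.sh
chmod +x build-fixed.sh
./build-fixed.sh                  # prints "hello"
```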
Calling java from Bash: "Cannot execute binary file"
1,494,218,857,000
A C# file in mono can be compiled using the gmcs command. This will create a hello.exe file. $ gmcs hello.cs $ ls hello.cs hello.exe $ ./hello.exe Hello from Mono! To generate a linux executable, I tried this command, but it generates the error: $ gmcs /t:exe hello.cs /out:hello Unhandled Exception: System.ArgumentException: Module file name 'hello' must have file extension. I want to create a standalone executable so that I can simply run it and get the desired output: $ ./hello Hello from Mono! I searched and found a solution which mentions a tool called mkbundle: $ mkbundle -o hello hello.exe --deps Sources: 1 Auto-dependencies: True embedding: /home/ed/Projects/hello_world/hello.exe embedding: /mono/lib/mono/1.0/mscorlib.dll Compiling: as -o /tmp/tmp54ff73e6.o temp.s cc -o hello -Wall temp.c `pkg-config --cflags --libs mono` /tmp/tmp54ff73e6.o Done $ ls -l total 3 -rwxr-xr-x 1 ed users 1503897 2005-04-29 11:07 hello -rw-r--r-- 1 ed users 136 2005-04-29 11:06 hello.cs -rwxr-xr-x 1 ed users 3072 2005-04-29 11:06 hello.exe This utility does not seem to exist in my Mono install. I found that this is available in the mono-devel package. To install this package meant installing around 82 other packages. My goal was to keep my mono install minimal for the time being. Is there a way to install mkbundle standalone?
I was very impatient and felt that the package mono-2.0-devel might have mkbundle. So I went ahead and installed mono-2.0-devel which needed only 18 additional packages. When I typed mkb and hit tab, it showed me mkbundle2. I tried: $ mkbundle2 -o hello hello.exe --deps OS is: Linux Sources: 1 Auto-dependencies: True embedding: /home/minato/Projects/Practice/mono/hello.exe embedding: /usr/lib/mono/2.0/mscorlib.dll Compiling: as -o temp.o temp.s cc -ggdb -o hello -Wall temp.c `pkg-config --cflags --libs mono` temp.o Done $ ls hello hello.cs hello.e hello.exe $ ./hello Hello from Mono! This was what I needed in the first place. Thanks to the command-not-found tool.
Generating a Linux executable with Mono with mkbundle
1,494,218,857,000
I've just tried installing tmux from source (via installing libevent first). The installation seemed fine, without throwing any obvious error. But when I typed tmux in iTerm2, it returned "command not found". However, there is clearly an executable named tmux in /opt/bin/. So I am a bit puzzled that whether I have successfully installed tmux on my mac. How do I get it work with iTerm2?
When you type tmux in a shell, the shell looks for an executable called tmux in one of the directories enumerated in the PATH variable (it's a colon-separated list of directories). Check if /opt/bin is in your path: echo $PATH If /opt/bin is not in your path, then either install tmux in a different directory that is in your path, or add /opt/bin to your path. The usual place to set the PATH variable is in ~/.profile, or in ~/.bash_profile if you have that but no ~/.profile, or in ~/.zprofile if your shell is zsh. If /opt/bin is in your path, what's happening is that your shell is keeping the path contents in a cache in memory and not noticing the new addition. Run hash -r to rebuild the cache in this shell. Each shell instance builds its own cache, so you won't have this problem in shells that you start after the installation of tmux.
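The lookup behaviour described above can be demonstrated in a throwaway directory (the directory and tool names below are made up; substitute /opt/bin and tmux for the real case):

```shell
mkdir -p "$HOME/demo-bin"                        # stand-in for /opt/bin
printf '#!/bin/sh\necho from-demo\n' > "$HOME/demo-bin/demotool"
chmod +x "$HOME/demo-bin/demotool"

command -v demotool || echo "not on PATH yet"    # not found: directory not in PATH
PATH="$HOME/demo-bin:$PATH"                      # add the directory for this shell
hash -r                                          # drop any stale cached lookups
demotool                                         # prints: from-demo
```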
How to verify if tmux is properly installed on Mac OSX
1,494,218,857,000
I've installed PgSQL 9.1.2 from the PostgreSQL repositories and everything is fine except that I can't execute its commands from any path in my OS. For example, suppose that I want to run the command pg_dump: for that I need to change from (actual path) to /usr/pgsql-9.1/bin and then execute it as ./pg_dump, even if I'm the root user. I'm considering making a symlink for each executable under /usr/pgsql-9.1/bin in /bin, but I don't know if this is the best way. I'm also considering adding PATH="/usr/pgsql-9.1/bin:$PATH" to /.bashrc, but I don't know the right way to do this. Any help on this?
Just open your .bashrc and add the following lines in the end: PATH=$PATH:/usr/pgsql-9.1/bin export PATH
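To see the effect without touching your real ~/.bashrc, the same change can be rehearsed against a temporary file (a sketch; the PostgreSQL path comes from the question):

```shell
rc=$(mktemp)                                   # stand-in for ~/.bashrc
printf 'PATH=$PATH:/usr/pgsql-9.1/bin\nexport PATH\n' >> "$rc"
. "$rc"                                        # reload, as a new login shell would
case ":$PATH:" in
  *:/usr/pgsql-9.1/bin:*) echo "pgsql bin dir is on PATH" ;;
esac
rm -f "$rc"
```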
Access PgSQL executables from anywhere
1,494,218,857,000
I am running a Python script named myscript.py on an Ubuntu machine. I usually use the python command to run Python scripts, as below: python main.py Recently, I downloaded a Python script from a GitHub repository (it can be found at https://github.com/gsrivas4/mnist-gan) which asks to run the script using './', as below: ./main.py Running Python scripts the second way is new to me. I am confused about when we can use './' to run scripts, and whether this method is used for other languages as well. Usually, I would expect the name of a binary such as python, which starts a process, before the name of the script; the script is then fed to that process. Also, I want to understand the meaning of './' when we run scripts. I feel this is a trivial question, but I could not find much help online. I also tried making one of my Python files executable and then ran it. However, running it using ./ gave me errors on every Python library import.
./ is simply a relative path indicating the current working directory. When executing a file that is not in your PATH it's necessary to prefix it with either the full path or a relative path, ./ is the most simple method of doing this, but it would also work if you used a full path like /path/to/script.py The reason your python script gets errors when you execute it as: ./script.py rather than: python script.py is because you do not have a hashbang(shebang) interpreter line at the top telling it which interpreter to use when executing the script. It's likely trying to execute it with bash or whatever shell you are using to execute the script. (See Which shell interpreter runs a script with no shebang?) To get your script to execute properly using python add the following to the first line in the script: #!/usr/bin/env python
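The two invocation styles can be compared side by side (a sketch; assumes python3 is installed, and the file names are made up):

```shell
# Without a shebang, the script must be handed to the interpreter explicitly
printf 'print("hi from python")\n' > noshebang.py
python3 noshebang.py

# With a shebang line and the execute bit, the kernel finds the interpreter itself
printf '#!/usr/bin/env python3\nprint("hi from python")\n' > withshebang.py
chmod +x withshebang.py
./withshebang.py
```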
Running python script on ubuntu machine using ./myscript.py
1,494,218,857,000
I have a binary code and I want to run it. 01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100 How can I create a file "application/x-executable" and execute it on Debian?
That's just the binary representation of the ascii encoding of "Hello World", not an executable, there's no way to execute that.
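You can't execute it, but you can decode it. A small bash sketch that turns each 8-bit group back into its ASCII character (2#... is bash's base-2 arithmetic syntax):

```shell
decode() {
  # convert each binary octet to octal, then let printf emit the character
  for b in "$@"; do
    printf "\\$(printf '%03o' "$((2#$b))")"
  done
  echo
}
decode 01001000 01100101 01101100 01101100 01101111 00100000 \
       01010111 01101111 01110010 01101100 01100100   # prints: Hello World
```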
Execute binary code
1,494,218,857,000
I'm setting up Prometheus on a web server, and I noticed that each exporter is its own program that must be added to a directory in $PATH. My question is, is there any advantage to making a specialized directory for these (for example, "/usr/exporters/bin", to make up some example), putting all exporter programs in there, and adding that directory to $PATH? Or is it best to just push the programs to the default directory for housing binaries?
The only benefit is having fewer directories in $PATH, therefore, fewer directories to search when looking for an executable, but: This event (searching all the directories in $PATH) is rare. Once an executable is found, its location is cached in a hash table within bash (cleared with hash -r, or rehash in csh/zsh). No need to search $PATH every time. This event isn't expensive. All the information needed (file exists and permissions allow eXecution) can be gathered from the file's inode - no need to read each file's contents. The reasons for NOT moving the executables to a common other directory include: You'll have a non-standard environment. When you ask for help, extra effort will be needed to explain this. Problems caused specifically by the non-standard environment will be very difficult to solve. You'll have a non-standard environment. When updated versions are released, your environment won't match what the update expects. You'll have a non-standard environment. You'll have to remember and do the non-standard environment updates this week, next week, the week after that, ... forever. It's Monkey Motion for no benefit.
Is there any benefit to grouping similar programs into a single path directory?
1,494,218,857,000
Whenever I create or copy a few shell files to a USB storage device, I am not able to make them executable. If I create test.sh, its default file permission will be 644, but when I execute chmod 777 test.sh no error is reported and echo $? returns "0". Still, ls -l shows the permission as 644 and I cannot execute it as ./test.sh
Yes, this can occur if your device is formatted with a filesystem that does not support that kind of permission setting, such as VFAT. In those cases, the umask is made up on the fly from a setting in the fstab (or the hotplugging equivalent). See, most probably, man mount for details. For example, for VFAT, we find: Mount options for fat uid=value and gid=value Set the owner and group of all files. (Default: the uid and gid of the current process.) umask=value Set the umask (the bitmask of the permissions that are not present). The default is the umask of the current process. The value is given in octal. etc.
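The permissions you see follow directly from the mask: FAT stores no permission bits of its own, so every file's displayed mode is 0777 with the masked bits removed (whether that mask comes from umask or fmask depends on the mount options, as man mount explains). A sketch of the arithmetic:

```shell
mask_to_mode() {
  # mode shown by ls = full permissions (0777) with the masked bits stripped
  printf '%04o\n' "$(( 0777 & ~0$1 ))"
}
mask_to_mode 022    # prints 0755 (a common default)
mask_to_mode 0133   # prints 0644, a mask like this yields the 644 the asker sees
```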
can't change file permission
1,494,218,857,000
I am running a quite complicated script which changes directories and runs many other commands. All these commands are run using 'scriptname', which works fine when I execute the main script from my terminal. However, sometimes I have to ssh into a server and run the main script from there, it fails as there isn't a ./ before each command. I'd rather not go through all the scripts and executables and add a ./ to the commands, so is there another way to solve this problem?
There are ways to change this behavior including adding ./ to your PATH environment variable, but this introduces a serious security risk to your environment. The way your scripts are written is really wrong and the correct solution is to go through all of them and fix the way local scripts are called. This is the only proper fix that will not introduce extra problems down the road and create security issues for you. I know it's not what you wanted to hear, but bite the bullet and do it right.
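One mechanical way to do that fix: have each script resolve its own directory once and call its helpers through it, so the call no longer depends on the caller's working directory or PATH. A sketch with made-up file names:

```shell
mkdir -p proj
printf '#!/bin/sh\necho helper-ran\n' > proj/helper.sh
chmod +x proj/helper.sh

cat > proj/main.sh <<'EOF'
#!/bin/sh
# resolve the directory this script lives in, independent of the caller's cwd
script_dir=$(cd "$(dirname "$0")" && pwd)
"$script_dir/helper.sh"     # instead of the bare, PATH-dependent: helper.sh
EOF
chmod +x proj/main.sh

here=$PWD
( cd / && "$here/proj/main.sh" )   # works even when invoked from elsewhere
```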
Run script without ./ before the name
1,494,218,857,000
I'm looking to find all executable files that are NOT in my $PATH. Currently I'm doing this find / \( -path "/opt" -prune -o -path "/var" -prune -o -path "/bin" -prune -o -path "/sbin" -prune -o -path "/usr" -prune -o -path "/opt" \) -o -type f -executable -exec file {} \; I feel like there is a better way, I tried using a for loop with IFS=: to separate out the different parts of PATH but couldn't get it to work. Edit: I should have specified I don't want to use a script for this.
Assuming GNU find and the bash shell (as is used in the question), this is a short script that would accomplish what you're trying to do: #!/bin/bash IFS=: set -f args=( -false ) for dirpath in $PATH; do args+=( -o -path "$dirpath" ) done find / \( \( "${args[@]}" \) -o \ \( -type d \( ! -executable -o ! -readable \) \) \) -prune -o \ -type f -executable -exec file {} + This first creates the array args, consisting of dynamically constructed arguments to find. It does this by splitting the value of $PATH on colons, the value that we've given to the IFS variable. The splitting is happening when we use $PATH unquoted in the loop header. Ordinarily, the shell would invoke filename globbing on each of the words generated from the splitting of $PATH, but I'm using set -f to turn off filename globbing, just in case any of the directory paths in $PATH contains globbing characters (these would still be problematic as the -path operand of find would interpret them as patterns). If my PATH variable contains the string /usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin then args will be the following list (each line here is a separate element in the array, this is not really a set of strings with newline characters in-between them): -false -o -path /usr/bin -o -path /bin -o -path /usr/sbin -o -path /sbin -o -path /usr/X11R6/bin -o -path /usr/local/bin -o -path /usr/local/sbin This list is slotted into the find command invocation, in parentheses. There is no need to repeat -prune for each and every directory, as you could just use it once as I have above. I've opted for pruning any non-executable or non-readable directory. This ought to get rid of a lot of permission errors for directories that you can't access or list the contents of. 
Should you want to simplify the find command by removing this bit, use find / \( "${args[@]}" \) -prune -o \ -type f -executable -exec file {} + Also, note that I'm running file on the found pathnames in batches, rather than once per pathname.
Find all executable files excluding certain directories
1,494,218,857,000
I pressed ENTER after typing the following stupid command on my home directory: find . -type f -exec chmod -x '{}' ';' What do you advise as a fix for this. My guess is that I can't do anything but do something like: find . -type f -exec chmod og+x '{}' ';' Or may be do some tricky stuff based on extensions (which doesn't seem very pertinent under Linux). Or may be some of you has an idea or a pointer on how to know which file should be executable under linux and how to detect them to turn them back to executables...
Here is a script that I wrote a while back to fix permissions on files copied from a FAT system. Won't work if the file names contain newlines (although if someone wants to fix it so that it does, feel free): #!/bin/sh [ $# != 0 ] && dir="$1" || dir=. [ -d "$dir" ] || { echo "usage: $0 [dir]"; exit 1; } cat <<- EOF Will recursively alter permissions under directory '$dir'. Consider backing up permissions with 'getfacl -R $dir' first. Continue? [Y/n] EOF read reply [ "$reply" = Y ] || exit 0 echo "Changing all directories to mode 755..." find "$dir" -type d -exec chmod 755 {} + # simplest way for now is just to make all files non executable, then fix ones which should be echo "Changing all files to mode 644..." find "$dir" -type f -exec chmod 644 {} + # use a temp file instead of a variable since the shell will strip nulls from the string tmpfile=$(mktemp) # screwed if filename contains a newline - fixable with a better sed script echo "Using magic to find executables..." find "$dir" -type f -exec file -hN0 -e apptype -e cdf -e compress -e elf -e tar -e tokens {} + | sed -n '/\x0.*executable/p' >"$tmpfile" # ELF binaries echo "\nSetting ELF executables to mode 755...\n" sed '/\x0.*ELF/!d; s/\x0.*$//' "$tmpfile" | xargs -rd '\n' chmod -c 755 scripts=$(sed '/\x0.*text/!d; s/\x0.*$//' "$tmpfile") IFS=" " # only make scripts executable if they have a shebang echo "\nSetting scripts with a shebang to mode 755...\n" for file in $scripts do head "$file" | grep -q '^#!' && chmod -c 755 "$file" done rm "$tmpfile"
How to recover from recursive 'chmod -x' on my home folder
1,356,038,512,000
I am looking for a structured format of version info for OS-level executables, such as in /usr/bin and /usr/local/bin. The problem we are having is the inconsistent architecture between our PROD and TEST environments and we are finding that many executables in our lower environments have had patches applied whereas PROD has not, which invalidates a lot of testing -- stuff works in TEST but doesn't in PROD because of such system discrepancies. So I would like to run a system assurance check to list all executables and get the version number but nothing else and then produce deltas. Some commands don't support the -version option but even those that do display very verbose, free-text format of a version narrative and the version number is buried somewhere within, with no way to extract it programmatically. Alternatively, I was thinking to run a file-level cksum for every executable as an option of last resort but I was hoping that there would be a way to extract version info relevant fields programmatically. Thanks
I'm inclined to say there is no easy way to do this. I say this because versioning is not at all a standardized procedure in UNIX/Linux or with any of the vendors, at least at the program level. A suggestion might be to examine the installed package information, which does contain versioning information. However, if people install products without using the standard package manager for your distribution, then you'll have faulty information as well. To be absolutely sure, you'll probably have to go with some form of checksum comparison between the systems.
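The checksum-of-last-resort approach is easy to script. A sketch using sha256sum manifests (the directory and file names are stand-ins for the real TEST/PROD binary trees):

```shell
mkdir -p fake-bin
printf 'binary v1\n' > fake-bin/tool

# build a sorted checksum manifest of every regular file
find fake-bin -type f -exec sha256sum {} + | sort -k2 > manifest-test.txt

printf 'binary v2 (patched)\n' > fake-bin/tool   # simulate drift on the other host
find fake-bin -type f -exec sha256sum {} + | sort -k2 > manifest-prod.txt

# any differing line is a file whose contents diverge between environments
diff manifest-test.txt manifest-prod.txt || echo "drift detected"
```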
Is there a structured format of version info for OS-level executables?
1,356,038,512,000
For example, if I do [OP@localhost executable]$ cat garbage lalala trololol [OP@localhost executable]$ chmod +x garbage [OP@localhost executable]$ ./garbage ./garbage: line 1: lalala: command not found ./garbage: line 2: trololol: command not found Bash seems to be trying to interpret this "executable" as a script. However, there are two instances where this clearly does not happen: when the file begins with a #!, and ELF files. Are there any more? Is there a comprehensive documentation of this somewhere?
Expanding on my previous comment on another answer, the kernel contains seven binary loaders (look for files starting with binfmt_ there, or read the binfmt-specific Kconfig): a.out (which is currently on a stay of execution); ELF; FDPIC ELF (on ARM, MMU-less SuperH, and C6x); em86 (on Alpha); flat binaries (on MMU-less systems, or ARM, or M68k); scripts; the almighty binfmt_misc (see also What kinds of executable formats do the files under /proc/sys/fs/binfmt_misc/ allow?). These are what determine the types of executable files that the kernel can execute. binfmt_misc in particular allows many other binaries to be handled by the kernel (at least, from the perspective of the process calling one of the exec functions). However this doesn’t cover the whole story, since the C library and shells themselves are also involved. POSIX now requires that the execlp and execvp functions, when they encounter an executable which the kernel can’t run, try running it using a shell; see the rationale here for details. The way all this interacts to provide the behaviour you’re seeing is detailed in What exactly happens when I execute a file in my shell? and Which shell interpreter runs a script with no shebang?
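The distinction the loaders key on is visible from user space: a native binary starts with the ELF magic bytes, while a script's "magic number" is just the #! pair. A sketch (assumes /bin/sh is a native ELF binary, as on typical Linux systems):

```shell
# first four bytes of an ELF binary: 0x7f 'E' 'L' 'F'
head -c 4 /bin/sh | od -An -c

# a script is identified by its first two bytes
printf '#!/bin/sh\necho hi\n' > demo.sh
head -c 2 demo.sh        # prints: #!
```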
What types of executable files exist on Linux?
1,356,038,512,000
I'm following the course of Baking Pi – Operating Systems Development. In it they created another section .init. So can we create as many sections as we want (not just .data, .bss, .text) and can we put code and data (initialized of no) in any of them?. If so, what's the purpose of sections then?
Initial research At first sight it would appear that the answer would be "no" the specification for ELF only allows the following sections. C32/kernel/bin/.process.o architecture: i386, flags 0x00000011: HAS_RELOC, HAS_SYMS start address 0x00000000 Sections: Idx Name Size VMA LMA File off Algn 0 .text 00000333 00000000 00000000 00000040 2**4 CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE 1 .data 00000050 00000000 00000000 00000380 2**5 CONTENTS, ALLOC, LOAD, DATA 2 .bss 00000000 00000000 00000000 000003d0 2**2 ALLOC 3 .note 00000014 00000000 00000000 000003d0 2**0 CONTENTS, READONLY 4 .stab 000020e8 00000000 00000000 000003e4 2**2 CONTENTS, RELOC, READONLY, DEBUGGING 5 .stabstr 00008f17 00000000 00000000 000024cc 2**0 CONTENTS, READONLY, DEBUGGING 6 .rodata 000001e4 00000000 00000000 0000b400 2**5 CONTENTS, ALLOC, LOAD, READONLY, DATA 7 .comment 00000023 00000000 00000000 0000b5e4 2**0 CONTENTS, READONLY Source: http://wiki.osdev.org/ELF Other sources such as Wikipedia also show only the most basic section names, leading you to believe that these are all that are allowed. Additional searching showed that there are these 2 sections as well: .fini This section holds executable instructions that contribute to the process termination code. That is, when a program exits normally, the system arranges to execute the code in this section. .init This section holds executable instructions that contribute to the process initialization code. That is, when a program starts to run the system arranges to execute the code in this section before the main program entry point (called main in C programs). The .init and .fini sections have a special purpose. If a function is placed in the .init section, the system will execute it before the main function. Also the functions placed in the .fini section will be executed by the system after the main function returns. This feature is utilized by compilers to implement global constructors and destructors in C++. 
Source: http://l4u-00.jinr.ru/usoft/WWW/www_debian.org/Documentation/elf/node3.html But, yes you can have any sections But thanks to @AProgrammer for pointing me to the actual ELF Specification v1.2, there's a paragraph on page 1-16 which states the following: Section names with a dot (.) prefix are reserved for the system, although applications may use these sections if their existing meanings are satisfactory. Applications may use names without the prefix to avoid conflicts with system sections. The object file format lets one define sections not in the list above. An object file may have more than one section with the same name. So it would appear that it's entirely up to the program what sections it wants to utilize.
How many sections can I create in object file?
1,356,038,512,000
I have a binary that runs on my Debian Squeeze system, but then it doesn't do anything on my Debian Wheezy (kernel Linux 3.2.0-4-amd64) system. Both systems are 64 bit, while the executable is a 32 bit binary. Here's the output of: me@myhost:~$ file myApp.run myApp.run: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, stripped How do I go about troubleshooting this? I get no output whatsoever, it just returns immediately. Running the binary with strace: chadmichael@heraclitus: ~/dir$ sudo strace ./myApp.run execve("./myApp.run", ["./myApp"...], [/* 17 vars */]) = 0 [ Process PID=24457 runs in 32 bit mode. ] old_mmap(0xc6d000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0xc6d000) = 0xc6d000 readlink("/proc/self/exe", "/dir/myApp.run.run", 4096) = 129 old_mmap(0x8048000, 1108297, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x8048000 mprotect(0x8048000, 1108294, PROT_READ|PROT_EXEC) = 0 old_mmap(0x8157000, 42979, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0x10f000) = 0x8157000 mprotect(0x8157000, 42976, PROT_READ|PROT_WRITE) = 0 old_mmap(0x8162000, 15736, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x8162000 brk(0x8166000) = 0x866e000 open("/lib/ld-linux.so.2", O_RDONLY) = -1 ENOENT (No such file or directory) _exit(127) = ?
The 64-bit O/S does not have the 32-bit libraries installed. apt-get update; apt-get upgrade; apt-get install ia32-libs This will provide the missing /lib/ld-linux.so.2.
Why won't my binary run?
1,356,038,512,000
I looked at this scheme and now I want to know: can one executable be run on two systems which have the same ancestor (and so probably the same kernel)? For example, according to the scheme: Solaris <- System V R4 <- BSD 4.3, so, can the BSDs (OpenBSD, FreeBSD, NetBSD) and Solaris run the same executable? P.S. Maybe this question is obvious and meaningless to you, but I am completely new to *nix, so for me it is important.
Short answer: No. Medium answer: Maybe, if the target OS supports it. Long answer... First thing to be aware of is that different vendors may use different chip sets. So a Solaris binary may be compiled for a SPARC chip. This won't run on an Intel/AMD machine. Similarly AIX may be on a PowerPC. HP-UX might be on PA-RISC. Let's ignore all these problems and just stick with the "Intel/AMD" space. The next problem is that different OSes may expose different kernel system calls. This means that any call the application makes into the kernel won't do what is expected. This is obviously a problem. However the target kernel may be able to provide an "ABI compatibility layer"; the kernel (let's say a FreeBSD kernel) can detect you are trying to run a Linux binary and can translate between the Linux kernel ABI and the native kernel ABI. The next problem is one of libraries; a Linux binary would expect to be able to load glibc of a specific version, which may not be present in the hosting OS. This may be solvable by copying the required libraries over. Again an OS may make this easier for you, e.g. by having a package for these libraries to make them easy to install. After all this your binary may run :-) Back in the 90s, Linux had a iBCS module which allowed for exactly this sort of thing. It made it possible to run, for example, SCO Unix programs on Linux. I had run SCO Unix Oracle on my machine as a proof of concept. It worked pretty well! Obviously there was no vendor support, so it wasn't suitable for production :-) Now Linux has a massive foothold in this space other OSes try and add compatibility layers to allow Linux programs to run on their OSes. So if your OS supports is and if you install and configure it properly then you may be able to run some programs from another Unix.
*nix executable compatibility
1,356,038,512,000
Say I have a somecommand.sh and I use chmod 777 to make it executable. Why do I have to type ./somecommand.sh instead of just somecommand.sh to run it from its own directory? Does this make any sense? If the ./ can be omitted, what should I do? Thanks
You likely have to do that because your current path (pwd) is not in your search path for executable files. Type this in your console: echo $PATH | tr ':' '\n' Every folder that is printed is in the search path for executable files (in that order). Now, if you want to run a file from a different directory, you have to supply the full (relative or absolute) path. ./script.sh means the current directory (./, relative to where you are) and the filename (script.sh). You can equally well use the full path (starting from the root folder /) to your file (for instance /home/guo/script.sh, if that's your username, and when your file is in your home directory). As a tip, if you regularly use that file, I suggest making a local bin directory (~/bin, inside your home directory) and then putting export PATH="$HOME/bin:$PATH" into your .bashrc, for instance. Then put your scripts into that directory. Another thing: I suggest not using 777 as the permissions. Instead I suggest 755, so only you have permission to overwrite the file. If you want to make a file executable just use chmod +x script.sh, it'll usually do what you want. In a similar fashion as I've described above, it is possible to add the "current" directory to the path: export PATH=".:$PATH", but this is not advisable. I strongly advise using a private directory (~/bin) for those use cases.
Why when we run a executable file we need to add ./ ahead? [duplicate]
1,356,038,512,000
How do I change the permissions of an executable file to access the /etc/shadow file? So far I have the following bash script: #!/bin/bash gcc print.c -o print chmod +s print ./print exit 0 and the following c-code: #include <stdio.h> #include <stdlib.h> int main() { FILE *open = fopen("/etc/shadow", "r"); int tmp; do { tmp = fgetc (open); printf("%c", tmp); } while (tmp != EOF); fclose(open); return 0; } I can easily print the /etc/passwd file, but I get a dumped core once I try to access the /etc/shadow file.
To give a binary permission to run things as root, you need to set the set-uid bit on the binary (this is often informally called the "sticky bit", but the sticky bit is actually a different flag). Normally after compiling, you might see: # ls -l print -rwxr-xr-x 1 mark mark 111 24 Oct 17:32 print Setting the set-uid bit can be done using an octal mode, or symbolically (note that you will need "root" privileges in order to change the ownership of a file): # chown root print # chmod o-x print # chmod u+s print # ls -l print -rwsr-xr-- 1 root mark 111 24 Oct 17:32 print In the first version, the s in the permissions, as you already figured out, indicates that this is both executable and "set-uid". But you have to change the ownership of the file also, so that "set-uid" sets the uid to root rather than your own user. At this point, the "group" hasn't changed its value, but that's not important in this particular case. (Though it might be a factor for security.) The final line above shows permissions that can also be expressed as an octal number, so if this is the result you want, then you could replace the two chmod lines above with a single one: # chmod 4754 print Have a look at the man page for chmod for more details. If this isn't what you're looking for, please clarify your requirements in your question. IMPORTANT NOTE: the /etc/shadow file is kept private for a reason. If you expose it with something that can be run by other users, you may compromise the security of your system. Removing world executable permission is a "nod" towards security, but if you feel that you need to expose /etc/shadow in this way, you may be solving the wrong problem.
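The permission arithmetic can be checked without the root/chown part (the file name is arbitrary; %a is GNU stat's octal-mode format):

```shell
touch demo-bin
chmod 4754 demo-bin        # octal: setuid (4) + rwx (7) + r-x (5) + r-- (4)
ls -l demo-bin             # shows: -rwsr-xr--
stat -c '%a' demo-bin      # prints: 4754
```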
print the /etc/shadow file in the console
1,356,038,512,000
I'd like to override some hardcoded paths stored in pre-compiled executables, like "/usr/share/nmap/", and redirect them to another dir. My ideal solution should not require root privileges, so creating a symlink is not OK. (Recompiling is also not an option.)
I've just found this ptrace-based chroot reimplementation: PRoot. The bind function is just what I was looking for! This is more reliable than replacing strings in the executable and can easily be used in scripts...
override hardcoded paths in executables
1,356,038,512,000
I have Crunchbang 64 bit, a Debian wheezy distro. Debian has a rebranded Firefox called Iceweasel and a rebranded Thunderbird called Icedove. But I don't want either of them. I was able to install the latest version of Firefox by adding a Linux Mint repo and installing it from there. I did the same and installed Thunderbird, but it tells me that I'm not using the latest version and I need to download it from their site. I downloaded it from the site and when I run sudo sh run-mozilla.sh I get run-mozilla.sh: Cannot execute I tried different commands, none worked. Do I need to chmod it?
First of all, don't add Mint repositories to Debian, not a good idea. Mint is based on Ubuntu which, while based on Debian, is not 100% compatible with Debian repositories. Mixing them is likely to cause trouble. Instead, add LMDE (Linux Mint Debian Edition) repositories. LMDE is Debian and is 100% compatible with the Debian repos. As long as you're running Debian testing, that should work with no problems. Second, as others have pointed out, this is really truly not worth the effort. Anyway, the error you get is actually run-mozilla.sh: Cannot execute . The . is important, it shows that the script expects an argument and since you don't give it one, it takes the current directory. The script is not an installer and is usually called by another script, not directly. To install the thunderbird binaries, follow the instructions here: wget 'http://releases.mozilla.org/pub/mozilla.org/thunderbird/releases/3.1.4/linux-i686/en-US/thunderbird-3.1.4.tar.bz2' -O- | sudo tar xj -C /opt && sudo ln -s /opt/thunderbird/thunderbird /usr/bin/thunderbird However, this will install the 32bit thunderbird which won't work on a 64bit system unless you have multiarch installed. You will also need to bring in the dependencies manually. Please don't do this, either install the deb from LMDE or just use icedove which is thunderbird with different icons.
How to install the real Thunderbird on Debian wheezy?
1,356,038,512,000
I'm trying to execute a compiled Lazarus file which was working on macOS 10.14.x. After updating to 10.15, I started to get an error, "Bad CPU type in executable", which as far as I understand means that it is no longer compatible. ./myScript ->>>>>>>>>>>>>>> bad CPU type in executable file myScript ->>>>>>>>>>>>>>> Mach-O executable i386 uname -a ->>>>>>>>>>>>>>> Darwin-MacBook-Air.local 19.0.0 Darwin Kernel Version 19.0.0: Thu Oct 17 16:17:15 PDT 2019; root:xnu-6153.41.3~29/RELEASE_X86_64 x86_64 uname -p ->>>>>>>>>>>>>>> i386 I wonder why this executable causes this error while it is i386 which had to be compatible with this version? Is there any way to run it on macOS 10.15.x? Or is the only way to build it again with different, compatible build settings? (This is not yet supported by Lazarus.)
macOS Catalina (10.15) dropped support for 32-bit executables, which is why your executable no longer works. The ideal solution is to build a 64-bit binary. The Lazarus wiki describes how to do this: target x86_64, use Cocoa widgets, and build with fpc rather than ppc386.
Executing "Bad CPU type" executables in 10.15.x
1,356,038,512,000
Multiple Linux distros can be installed on the same machine. The format of executables should be the same for every one of them. So I want to use multiple distros on a single machine and have access to some applications like Skype, Chrome or Spotify from all. I don't want to waste time and disk space to install them on distros separately. I want to use only modern distros, like Ubuntu, Mint, Fedora, Solus, Manjaro, etc. man hier expresses /usr/local as a folder is where programs which are local to the site typically go. I can make a separate partition with the contents of /usr/local and mount it on every distro. Please tell me if this is the appropriate folder. After mounting the appropriate folder: what is required for executables to be shared among distros (permissions, UID, GID)? will it be possible to install a piece of software on one distro and run it on another? will upgrade of a package on one distro be visible on the other distros? will removal of the package be visible on other distros? should I mount more folders, like /home, /usr/share/games, /usr/share/locale, /usr/bin, /lib, /opt or /var? how about flatpak and Appimages?
As far as I can understand, you want a multi-boot system where you boot a different Linux distro at a time, while keeping the same binaries in a shared partition. What you want to do isn't feasible, and if you tried, you would spend 10x more time than managing each distro separately. You're opening a gigantic can of worms here.

will it be possible to install a piece of software on one distro and run it on another?

In general, no. Linux executables are compiled differently for each distro. They depend on specific versions of specific installed libraries. You might be able to run a generic, non-distro-specific binary on all distros, but even a program that doesn't use external libraries and relies on the kernel only would depend on the kernel version, which is very different from distro to distro (e.g. Fedora uses a kernel version much more advanced than RHEL's or Ubuntu's). Not to mention that the same kernel version might be built with different config options depending on the distro.

what is required for executables to be shared among distros (permissions, UID, GID)?

This is too broad a question. The same package on two different distros might, for example, need to run under different users. So the requirements differ depending on the software and on the distro.

will upgrade of a package on one distro be visible on the other distros? will removal of the package be visible on other distros?

No. Each distro has its own package manager and package format, and they are not compatible with each other (see also this question: Why isn't there a truly unified package manager for Linux?). Trying to mix them up will make a mess. And compiling each piece of software from source to avoid dealing with package managers means opening another can of worms.
Concerning your questions about mountpoints and sharing partitions, note that it's tricky enough just sharing the /home partition between different distros, as seen in this question: Different linux distros sharing the same /home folder?
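The library-dependency problem described above is easy to see with ldd, which lists the shared libraries a dynamically linked binary needs. The exact libraries and paths printed differ per distro, which is why a binary built on one distro often fails to start on another. A minimal sketch (using /bin/sh only as a convenient example binary):

```shell
#!/bin/sh
# Print the shared libraries /bin/sh depends on. The library names,
# versions, and paths shown are specific to this distro's toolchain.
ldd /bin/sh

# On a distro missing one of the dependencies, ldd would report it as:
#   libsomething.so.3 => not found
# and the program would refuse to start.
```

Running this on two different distros and comparing the output shows exactly the version skew the answer is talking about.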
Different Linux distros sharing applications
1,356,038,512,000
Using pipes, one can create files with simple shell built-ins.

{ echo "#!/bin/bash"
  echo "echo Hello, World!"
} > helloworld.sh

With chmod these can then be made executable.

$ chmod 755 helloworld.sh
$ ./helloworld.sh
Hello, World!

I wonder whether it is possible to save the chmod step. I already found that umask cannot do the job. But perhaps someone knows an environment variable, bash trick, program to pipe through or other neat way to do it. Is it possible to have the file created with the executable bit already set?
It is not possible to create an executable file solely with a shell redirection operator. There is no portable way, and there is no way in bash either (in the source code, you can see that redirection calls do_redirection_internal, which calls redir_open with the parameter mode set to 0666, and this in turn calls open with this mode).

You're calling a shell command anyway, so add ; chmod +x … somewhere in it. There's absolutely nothing wrong with that. One more line of code is not a problem. You need to do three things (create a file with some given content, make the file executable, execute it), so write three lines.

There is a relatively obscure shell command that can create an executable file with some specified content: uudecode. But I would not recommend using it: it requires the input to be passed in a non-readable format, it bypasses the user's umask, and it's obscure.

A sane alternative is to call bash /the/script instead of chmod +x /the/script && /the/script, if you know what interpreter to execute the file with.
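For reference, the recommended "three things, three lines" version looks like this in practice (the filename helloworld.sh is arbitrary):

```shell
#!/bin/sh
# Step 1: create the file with a redirection (mode is 0666 & ~umask).
{ echo '#!/bin/sh'
  echo 'echo Hello, World!'
} > helloworld.sh

# Step 2: make it executable - the one step that can't be skipped.
chmod +x helloworld.sh

# Step 3: run it.
./helloworld.sh    # prints: Hello, World!
```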
create executable files via piping
1,356,038,512,000
Suppose I have a shell script file foo.sh. I can do chmod +x foo.sh and change it into an executable file. In Kernighan and Pike's The Unix Programming Environment (UP), they show that after this, typing foo should execute the script. Instead, on my Ubuntu system I need to type sh foo or ./foo. I am guessing this is due to some feature of the shell that wasn't present earlier (when UP was written). I would appreciate it if someone could enlighten me on why this difference exists and why it is important.
The name of the executable is important. If your file is named foo.sh, then executing foo will not work unless there is some other executable named foo. Unlike Windows, Unix does not do implicit file extensions.

If the following works:

./foo.sh

But this doesn't:

foo.sh

That means that the file is in your current directory and your current directory is not in your PATH. For your protection, if you don't explicitly provide the path to a command, the shell will only look for the command among files that are in your PATH.

Spaces are important. The following may work:

./foo.sh

But this (taken from the first version of this question) certainly will not work:

./ foo.sh

With a space between ./ and foo.sh, the shell will think that you want to execute the current directory with foo.sh as an argument. This will generate an error message.
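The PATH-lookup behaviour described above can be demonstrated with a throwaway directory (/tmp/pathdemo is an arbitrary example path):

```shell
#!/bin/sh
# Create an executable script in a directory that is NOT on PATH.
mkdir -p /tmp/pathdemo
printf '#!/bin/sh\necho works\n' > /tmp/pathdemo/foo.sh
chmod +x /tmp/pathdemo/foo.sh
cd /tmp/pathdemo

./foo.sh                                  # prints: works (explicit path)
foo.sh 2>/dev/null || echo "not found"    # bare name fails: . is not searched

PATH="$PATH:/tmp/pathdemo"
foo.sh                                    # prints: works (now found via PATH)
```

The bare name only starts working once the directory is listed in PATH; the shell never falls back to the current directory on its own.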
About executing shell script
1,356,038,512,000
Coming from a Windows platform, I am a bit confused about how compressed files are installed in Linux. I am using Fedora 20. I downloaded the FoxIt PDF reader from here. I also read this post, which explains what to do with compressed files. However, I am still confused as to what to do once the bz2 file is uncompressed. The readme file states:

For Tar package installation, please note that the "fpdfcjk.bin" file has to be put into the same directory where the "FoxitReader" file is and also your system has to support displaying Chinese, Korean and Japanese normally, so that PDF files containing Chinese/Japanese/Korean fonts can be properly displayed.

This is what I get:

[op@localhost Downloads]$ ls
1.1-release  FoxitReader-1.1.0.tar.bz2
[op@localhost Downloads]$ cd 1.1-release/
[op@localhost 1.1-release]$ ls
FoxitReader  fpdfcjk.bin  fum.fhd  po  Readme.txt
[op@localhost 1.1-release]$ ./FoxitReader
bash: ./FoxitReader: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

Any suggestions on what I should be doing when a bz2 file is extracted? I know I could probably install this through yum, but I would really like to do it the way of extracting a compressed file. Any suggestions on tackling this problem would be appreciated.
That error (likely) means you are trying to run a 32-bit executable on a 64-bit system. I'll answer the specific issue here, but see the bottom of the answer for the better approach in general.

You say you have yum around, so this may help you:

yum install lib/ld-linux.so.2

yum will try to find anything that provides that file and then install it. It should find glibc.i686, so you can jump right to that with:

yum install glibc.i686

You may well find that you need other libraries too. This will be a "multilib" setup; you should look into what that will involve for your particular distribution.

I also know i could probably download this through yum but I would really like to do this the way of extracting a compressed file.

You will almost always be better off installing software with the package manager (that's what it's for!), rather than extracting random executables off the internet. Try to wean yourself off this approach in general - it's often just not going to work, and even when it does it's suboptimal.

In this case in particular the software may not be in the package repository, so that option may not be available, but note that there is an "RPM" download option on the website you got it from. RPM is the package format used on your distribution. This will almost certainly be a better option than the tarball, so I suggest trying that instead. Install that file with:

rpm -ivh foxit.rpm

substituting your own filename. The package manager will be able to give you more information and help you more, even though it wasn't from one of the distribution repositories.
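You can confirm the 32-bit-vs-64-bit diagnosis without installing anything: byte 5 of an ELF file (the EI_CLASS field) is 1 for 32-bit and 2 for 64-bit. A sketch using only POSIX od (shown on /bin/sh as a stand-in for the FoxitReader binary; the loader paths in the comments are the usual x86 names and may differ on other architectures):

```shell
#!/bin/sh
# EI_CLASS is the 5th byte of an ELF header: 1 = 32-bit, 2 = 64-bit.
elf_class() {
    od -An -j4 -N1 -tu1 "$1" | tr -d ' '
}

case "$(elf_class /bin/sh)" in
    1) echo "32-bit ELF (wants a loader like /lib/ld-linux.so.2)" ;;
    2) echo "64-bit ELF (wants a loader like /lib64/ld-linux-x86-64.so.2)" ;;
    *) echo "not an ELF binary?" ;;
esac
```

If the binary is class 1 and the 32-bit loader file is missing from your system, you get exactly the "bad ELF interpreter" error from the question.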
Installing a bz2 file
1,356,038,512,000
Having downloaded archives of Sage, Firefox, and Thunderbird from their respective websites on Trisquel 6.0, neither clicking the appropriate shell script nor executing a command to run the programs will open them. Looking at the files' permissions, they are indeed executable.

Edit: Firefox and Thunderbird have now mysteriously started working; Sage, however, remains inoperable. Having downloaded it from the first link on this page, extracted it to my desktop (where Firefox and Thunderbird run just fine), opened the folder in the terminal and run the command sage (as there is an executable by that name in the folder), I receive the error:

bash: sage: command not found

Yet ls -l shows that the files are known to exist and be executable.

Edit #2: I only thought Firefox and Thunderbird were working; I was actually running the versions I already have installed. Running Firefox and/or Thunderbird with ./[executable] gives the error:

libxul.so: cannot open shared object file: No such file or directory
Couldn't load XPCOM.
The first thing to check after extracting a program from an archive is the permissions (chmod a+x ./sage), but if that was the problem, the error message would be “permission denied” and not “command not found”.

Given your description (“opened the folder in the terminal and run the command”), it's likely that you ran the command sage expecting to execute the program with this name in the current directory. Unix doesn't work like this: the shell only looks for programs in the directories listed in the PATH environment variable. It doesn't search the current directory implicitly first. To run a program in the current directory, you need to type its path:

./sage

If you want to run the program without specifying a path, you need to install it in a directory in your $PATH (typically /usr/local/bin or ~/bin). It's often convenient to leave the executable with the other files from the application and make a symbolic link to it in a directory in $PATH:

ln -s /path/to/sage-5.9/sage ~/bin/

or, if you're already in the directory containing the sage binary:

ln -s $PWD/sage ~/bin/

If you've just added a program to a directory in your PATH and your shell still complains that the command is not found, it may be because you tried before and your shell has kept the “not found” information in a cache. Run hash -r to rebuild the cache and try again. The next time you start a shell, this won't be a problem any more, because the cache isn't saved between runs of the shell.

If you execute a file that is present, with a specified path, and you get the “command not found” error message, it may be because you don't have the right loader. This can happen if you downloaded a binary that is supported by your CPU and kernel but you don't have the required userland support (no libraries).
This can also happen if the program is a script whose shebang line refers to an interpreter that isn't present on your system (though typical shells give a “bad interpreter” message rather than “command not found” in this case).
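The symlink-into-~/bin workflow above can be sketched end to end with throwaway paths (/tmp/app stands in for the extracted archive and /tmp/mybin for a directory on PATH; the fake "sage" script is just a placeholder executable):

```shell
#!/bin/sh
# Simulate an extracted archive containing an executable.
mkdir -p /tmp/app /tmp/mybin
printf '#!/bin/sh\necho sage-ok\n' > /tmp/app/sage
chmod +x /tmp/app/sage

# Keep the program with its files; link it into a PATH directory.
ln -sf /tmp/app/sage /tmp/mybin/
PATH="/tmp/mybin:$PATH"

hash -r     # drop any cached "not found" result for this command name
sage        # prints: sage-ok
```

The executable stays next to the rest of the application's files, and only the symlink lives in the PATH directory.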
Why can't I run programs extracted from an archive?
1,356,038,512,000
Does execution of a file need read permission? It is natural to think yes, because executing a file requires loading it into memory. If the answer is no, why is that? In particular, does the same apply when the file is a directory? Thanks.
When you execute a file, in many cases you don’t need to read it, so you don’t need read permission. You’re right, the system needs to read it on your behalf, but that’s not defined as requiring the read permission (because nothing running as you ever needs access to the file’s contents). The exception is any circumstance where executing a file involves reading it, by a process running with your credentials. So shell scripts, in fact scripts in general, require read permission, as would any executable handled by binfmt_misc. Likewise, accessing a directory doesn’t involve reading it: you can enter a directory without listing its contents. Think of this as exploring a building with a blindfold: execute/search permission allows you to unlock doors to change rooms (as long as you already know where the doors are), read permission allows you to remove the blindfold to see what’s in the room.
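The binary-versus-script distinction is easy to demonstrate, assuming /bin/echo exists (coreutils or busybox) and you are not running as root (root bypasses these permission checks):

```shell
#!/bin/sh
# A native binary can be executed with execute-only permission:
# the kernel reads it on your behalf, so you don't need the read bit.
cp /bin/echo /tmp/myecho
chmod 0111 /tmp/myecho          # --x--x--x
/tmp/myecho "still works"       # prints: still works

# A script cannot: the interpreter runs as you and must open the file.
printf '#!/bin/sh\necho hi\n' > /tmp/myscript
chmod 0111 /tmp/myscript
/tmp/myscript 2>/dev/null || echo "script failed: interpreter could not read it"
```

As a non-root user the last line prints the failure message, because /bin/sh gets "permission denied" when it tries to open the script.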
Does execution of a file need read permission?
1,356,038,512,000
In my $PATH I have the folder ~/.zsh/bin, which I use for small scripts and custom-built executable binaries; for example, I added a recently compiled tool I made called wercker_build_status to the folder. Yet when I type wercker_build_status on the command line it can't be found; I have to type the full path to the file, ~/.zsh/bin/wercker_build_status. That's not to say that nothing in the folder works: a script I have called wifi_status is in there, and typing that into the command line returns the wifi status as expected. Why is it that even though ~/.zsh/bin is in my $PATH, I can't just use a file I add to it?
Use $HOME in your path rather than tilde (~), especially if you enclose the new PATH in double quotes. The tilde is not expanded when it occurs in quotes.

Testing:

$ mkdir "$HOME/t"
$ cat >"$HOME/t/foo" <<END
#!/bin/sh
echo "hello"
END
$ chmod +x "$HOME/t/foo"
$ PATH="$PATH:~/t"
$ foo
zsh: command not found: foo
$ PATH="$PATH:$HOME/t"
$ foo
hello

See also: Why doesn't the tilde (~) expand inside double quotes?
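The underlying rule can be seen without touching PATH at all; a minimal sketch assuming a POSIX-ish shell:

```shell
#!/bin/sh
p="~/t"       # a tilde inside double quotes is NOT expanded
echo "$p"     # prints the two literal characters: ~/t

q=$HOME/t     # $HOME expands even inside quotes, so it is the safe choice
echo "$q"     # prints something like /home/you/t
```

This is exactly why the PATH entry written as "~/t" is searched as a literal directory named ~/t (which doesn't exist), while the $HOME/t form works.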
binary placed in folder on $PATH is not immediately accessible
1,356,038,512,000
Situation

Some programs that I build from source have a directory libexec in the installation directory (for example, gnuplot). As a rule, when the installation has a lib folder, I add

export LD_LIBRARY_PATH=${insdir}/lib:${LD_LIBRARY_PATH}

to my .bashrc. Likewise with PATH and PKG_CONFIG_PATH if ${insdir}/bin and ${insdir}/lib/pkgconfig exist. I have developed this practice based on the many indications to do so gathered with usage. I can see that the files contained in libexec are binary executables.

Questions

What is their purpose, in contrast to the executables stowed in bin? Should dedicated variables (in the guise of PATH, LD_LIBRARY_PATH, PKG_CONFIG_PATH) be set to make them known to the shell environment? If not, would PATH do just as fine? Or perhaps there's no need ever to set anything, because they are used by special programs that are content with a relative path?

This topic is close to Portable binaries and the libexec path, which addresses a similar point when creating libexec files in a package.
libexec is intended for private binaries, i.e. binaries which are used by a program but which shouldn't be available generally. See the FHS: /usr/libexec includes internal binaries that are not intended to be executed directly by users or shell scripts. So no, you shouldn't add it to any PATH-style variables.
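A toy install tree shows the intended relationship: the user-facing command lives in bin/ and invokes its private helper in libexec/ by an internal path, so nothing in libexec ever needs to appear on PATH. All names and paths below (/tmp/myapp, myapp-helper) are made up for the example:

```shell
#!/bin/sh
# Miniature install tree: bin/ is public, libexec/ is private.
prefix=/tmp/myapp
mkdir -p "$prefix/bin" "$prefix/libexec"

# Private helper: never on PATH, only called by the wrapper below.
printf '#!/bin/sh\necho "helper output"\n' > "$prefix/libexec/myapp-helper"
chmod +x "$prefix/libexec/myapp-helper"

# Public entry point: hard-codes the location of its own libexec.
cat > "$prefix/bin/myapp" <<'EOF'
#!/bin/sh
exec /tmp/myapp/libexec/myapp-helper "$@"
EOF
chmod +x "$prefix/bin/myapp"

/tmp/myapp/bin/myapp    # prints: helper output
```

Only $prefix/bin would ever be added to PATH; the helper is an implementation detail of myapp, which is the whole point of libexec.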
Should libexec folders be added to some PATH-like variables?
1,356,038,512,000
CentOS here, but I don't think that matters because this should be a core Linux question (methinks). While trying to install & run Apache Kafka (a Java executable) on a CentOS box, I thought of a question that applies to Linux in general. When you run a shell script or a native executable (such as java), does the script/executable dictate which user it runs as, or does the OS dictate which user the script/executable runs as (meaning, which ever user is executing the script/executable)? Is it possible and/or typical for processes to dictate which user they run as? Meaning can a script/application specify that it must run as root user, or as some other specific type of user? Either way, why is there a general admonishment about running processes as root vs running them as non-privileged users?
Short answer: both. Longer (and much more useful) answer: By default, the program will run as the user who launched it. However, a program can, if written to do so and given the correct permissions, assume root privileges and/or drop back down to a "system" user to run itself as. This ability must be explicitly bestowed on the program, though, either through the packaging and installation process or through actions taken by the administrator of that machine. The general admonishment is there because historical experience in UNIX and Linux has shown that quite often programs that use elevated (i.e. root) privileges that they do not need will often do bad things to the system. This can be from data corruption, to runaway processes that render the rest of the system unusable / unresponsive, to processes that unwittingly allow attackers access to your system in ways that you don't want them to.
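From the shell side you can always check which user a command is running as, and a script that should not have root privileges commonly enforces that itself. A sketch of both (the refusal is shown as a warning here so the example runs to completion either way):

```shell
#!/bin/sh
# By default a process runs as whoever launched it:
echo "running as: $(id -un) (uid $(id -u))"

# A common safety pattern: a script checks its own effective uid and
# refuses elevated privileges (real scripts would 'exit 1' here).
if [ "$(id -u)" -eq 0 ]; then
    echo "warning: running as root - this script does not need root" >&2
fi
```

The inverse pattern (requiring root, e.g. for installers) is the same check with the condition reversed.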
What user does a Linux script/app run as?