1,356,038,512,000
Before, I was using Linux Mint, and I could run a bash script directly from the desktop just by chmodding the script to be executable. For example, I have a script like this:

    #!/bin/bash
    VBoxSDL --startvm virtualmachine

then:

    chmod +x myscript

On the desktop, I just needed to double-click the script I'd created. Now I'm using Arch Linux with gnome-shell, and the same approach doesn't work: double-clicking only opens a text editor. Right-clicking likewise shows only the gvim editor to open the script, and an "open with other applications" option that has no way to run the script directly. So, how can I launch the script directly from the desktop environment, without a terminal?
If that isn't working, then you can create a .desktop file for your script. It would look something like this:

    # $Id: vbox-starter.desktop 22 $
    [Desktop Entry]
    Name=Custom Virtualbox Starter
    GenericName=VBox
    Comment=VBox
    Exec=VBoxSDL --startvm virtualmachine
    Terminal=true
    Type=Application
    Icon=Virtualbox
    Categories=GNOME;GTK;Utility;

Note that since your script contains only one single line, you can put that directly in the Exec value:

    Exec=VBoxSDL --startvm virtualmachine

If instead your script were longer, with multiple lines of code, you would point Exec at the script itself:

    Exec=~/Desktop/myscript
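As a sketch, a launcher like the one above can be generated from the shell with a here-document (the scratch directory stands in for ~/.local/share/applications or the desktop; the Exec value is the one from the question):

```shell
# Write a minimal .desktop launcher into a scratch directory.
dir=$(mktemp -d)
cat > "$dir/vbox-starter.desktop" <<'EOF'
[Desktop Entry]
Name=Custom Virtualbox Starter
Exec=VBoxSDL --startvm virtualmachine
Terminal=true
Type=Application
EOF
# Some desktops also require the launcher file itself to be executable.
chmod +x "$dir/vbox-starter.desktop"
```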
Executable script can't run directly from desktop in Arch Linux
I want to find out whether it is possible to circumvent an executable's GUI by calling the program's internal worker methods/functions directly. Assume the executables are C/C++ programs with working Gnome or KDE interfaces on Linux.
This can be done in special cases. What you describe is something like dynamically loading plugins via the C dynamic linking loader: try man dlopen for details on that. Usually the code called this way has to be compiled as "position independent", so you're almost certainly out of luck for any specific program. You could look at userland exec code for some hints on how this might be done, but it almost certainly won't work to do what you describe.
Is it possible to call an executable's functions?
I'm using Elementary OS. How do I mark a .jar file as executable? I do have the JDK installed, but I get an error saying the file isn't marked as executable.
You can either open a terminal and run

    chmod +x yourfile.jar

or right-click the file in the file manager, open the Properties window, switch to the Permissions tab and check the Execute box in the Owner row.¹

You could also avoid the need to mark it as executable by using the Java executable to invoke it:

    java -jar yourfile.jar

¹ At least that's the way it works in Nautilus/Files (GNOME's default file manager), which Pantheon's Files seems to resemble quite closely.
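The chmod route can be sketched like this (a throwaway file stands in for yourfile.jar):

```shell
# Create a stand-in for yourfile.jar and mark it executable.
tmp=$(mktemp -d)
touch "$tmp/yourfile.jar"
# Freshly created files are never executable (touch creates with 666 & ~umask).
[ -x "$tmp/yourfile.jar" ] && echo "already executable" || echo "not executable yet"
chmod +x "$tmp/yourfile.jar"
[ -x "$tmp/yourfile.jar" ] && echo "now executable"
```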
Mark as executable
I have a bunch of Java command-line programs I've written and would like to install for all users. Building with NetBeans, I get a /dist directory which contains myprog1.jar and a /libs directory with all the necessary libraries. The usual way to run is to go there and java -jar my.jar -options. What I'd like is to just type myprog1 -options from anywhere and have it run. The method I have thought of is:

- Create /opt/myjava/myprog1 which contains myprog1.jar and its /libs.
- Create a bash script myprog1 in /usr/local/bin which simply redirects all of the command-line args to java -jar /opt/myjava/myprog1/myprog1.jar

I'm not too keen on bash scripting... if this is a reasonable method, what would that script look like, given that each program has a variable number and order of arguments? Does the script also have to worry about standard Unix bits such as output redirection (>) and pipes (|)?
You can write a single wrapper script that executes a jar named after the way it's called, and make one symbolic link for each jar. Here's the jar-wrapper script (warning, typed directly into the browser):

    #!/bin/sh
    name=$(basename "$0")
    jar=
    for dir in /opt/myjava/*; do
      if [ -e "$dir/$name.jar" ]; then jar=$dir/$name.jar; break; fi
    done
    if [ -z "$jar" ]; then
      echo 1>&2 "$name.jar not found"
      exit 126
    fi
    exec /your/favorite/java -jar "$jar" "$@"

Then create as many symbolic links to the wrapper script as you like, and put them in your $PATH if you want:

    ln -s wrapper-script myprog1
    ln -s wrapper-script myprog2

If you are running Linux, and you are the system administrator, then you can select a Java interpreter to make jars directly executable, thanks to the binfmt_misc mechanism. For example, on my system:

    $ cat /proc/sys/fs/binfmt_misc/jar
    enabled
    interpreter /usr/lib/jvm/java-6-sun-1.6.0.07/jre/lib/jexec
    flags:
    offset 0
    magic 504b0304

This mechanism is documented in Documentation/binfmt_misc.txt in the Linux kernel documentation. To create an entry like the one above, run the commands

    jexec=/usr/lib/jvm/java-6-sun-1.6.0.07/jre/lib/jexec
    echo >/proc/sys/fs/binfmt_misc/register ":jar:M:0:504b0304::$jexec:"

Your distribution may have a mechanism in place for binfmt registration at boot time. On Debian and derivatives, this is update-binfmts, and the JVM packages already register jexec. If you need to pass options, register a wrapper script that adds the options instead of jexec directly.
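The wrapper's lookup logic can be exercised without Java by pointing it at a scratch tree (the /opt/myjava layout and the myprog1 name below are stand-ins):

```shell
# Recreate the wrapper's search loop against a scratch directory tree.
base=$(mktemp -d)
mkdir -p "$base/myprog1"
touch "$base/myprog1/myprog1.jar"

name=myprog1   # in the real wrapper this comes from $(basename "$0")
jar=
for dir in "$base"/*; do
  if [ -e "$dir/$name.jar" ]; then jar=$dir/$name.jar; break; fi
done
echo "resolved: $jar"
```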
How do I install multiple java command-line programs?
I made a fish function in ~/.config/fish/functions/confgit.fish:

    function confgit
        /home/john/Projects/confgit $argv
    end

But when I run this function it just says:

    fish: The file “/home/john/Projects/./confgit” is not executable by this user
    /home/john/Projects/./confgit $argv
    ^
    in function 'confgit'

The confgit file is a normal Python script. If I run it with ./confgit it runs fine. Here are the permissions of the script:

    -rwxr-xr-x 1 john john 5.8K 29. nov 02.04 confgit*

How can I fix this so I can use the function? Thanks for any help.
I worked to reproduce your problem, and the closest thing I could emulate was this:

    # file: ~/bin/janstest
    echo $argv

    # file: ~/bin/janstest2
    function janstest
        ~/bin/janstest $argv
    end
    janstest It works!

with file permissions as:

    stew@stewbian ~> ls -l ~/bin/jans*
    -rwxr-xr-x /home/stew/bin/janstest*
    -rwxr-xr-x /home/stew/bin/janstest2*

When I run it I get a similar error:

    stew@stewbian ~> ~/bin/janstest2
    Failed to execute process '/home/stew/bin/janstest2'. Reason:
    exec: Exec format error
    The file '/home/stew/bin/janstest2' is marked as an executable but could not be run by the operating system.
    stew@stewbian ~ [125]>

The solution was to prepend #!/usr/bin/fish to the script:

    stew@stewbian ~> cat ~/bin/janstest2
    #!/usr/bin/fish
    function janstest
        ~/bin/janstest $argv
    end
    janstest It works
    stew@stewbian ~> ~/bin/janstest2
    It works
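The rule the answer relies on is general: without a #! line, the kernel cannot tell which interpreter should run the file. A minimal illustration (with /bin/sh standing in for fish so it runs anywhere, and a throwaway script name):

```shell
# A script with a shebang runs under the named interpreter when executed directly.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho "It works"\n' > "$tmp/demo"
chmod +x "$tmp/demo"
out=$("$tmp/demo")
echo "$out"
```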
Fish: The file is not executable by this user
How do I create an executable shared library using cmake? Something like:

    libtest.so :: linkable shared library
    libtest.so :: executable too

Note: the gcc/g++ options to achieve this are known (https://unix.stackexchange.com/a/223411/152034), but I need the cmake way.
NOTE: First, there is seemingly an open issue in cmake related to this, so the following can be considered an indirect way to achieve the same. Now follow the illustration using cmake.

test.cpp

    #include <stdio.h>

    void sayHello (char *tag) {
        printf("%s: Hello!\n", tag);
    }

    int main (int argc, char *argv[]) {
        sayHello(argv[0]);
        return 0;
    }

ttest/test_test.cpp

    #include <stdio.h>

    extern void sayHello (char*);

    int main (int argc, char *argv[]) {
        printf("\nNow Inside test-test !\n");
        sayHello(argv[0]);
        return 0;
    }

CMakeLists.txt

    cmake_minimum_required(VERSION 3.5)
    project(pie_test)

    #shared-lib as executable
    add_library(${PROJECT_NAME} SHARED
        test.cpp
    )
    target_compile_options(${PROJECT_NAME} PUBLIC "-pie")
    target_link_libraries(${PROJECT_NAME} "-pie -Wl,-E")
    set_property(TARGET ${PROJECT_NAME} PROPERTY POSITION_INDEPENDENT_CODE 1)

    #executable linking to the executable-shared-library
    add_executable(test_test
        ttest/test_test.cpp
    )
    target_link_libraries(test_test pie_test)
    set_property(TARGET test_test PROPERTY POSITION_INDEPENDENT_CODE 1)

build.sh

    #!/bin/bash
    rm -rf build
    mkdir build
    cd build
    cmake .. #--debug-output
    make VERBOSE=1
    echo "Done!"
    echo ""

Reference for the gcc options here.
Building a shared library which is executable and linkable, using cmake
I downloaded Singular from the terminal. I simply can't find it now that it's installed! It's not in my installed applications, and the command "singular" in the terminal gives nothing. How do I start it?
Try Singular, with an upper-case S. As you can see in the list of files installed by this package here, the executable is /usr/bin/Singular. The same holds for the other architectures.
Installed "singular" and can't start it
I was looking at Linux audit reports. Here is a log from ausearch:

    time->Mon Nov 23 12:30:30 2015
    type=PROCTITLE msg=audit(1448281830.422:222556): proctitle=6D616E006175736561726368
    type=SYSCALL msg=audit(1448281830.422:222556): arch=c000003e syscall=56 success=yes exit=844 a0=1200011 a1=0 a2=0 a3=7f34afa999d0 items=0 ppid=830 pid=838 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="nroff" exe="/usr/bin/bash" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)

From my understanding, the comm field is the name the user used to invoke the exe binary. How come nroff is referring to /usr/bin/bash? Note that this is a general question; I have seen this kind of thing, which I cannot explain, happen many times. In this particular case, here is more data about nroff and bash on my system:

    [root@localhost ~]# which nroff
    /bin/nroff
    [root@localhost ~]# ll -i /bin/nroff
    656858 -rwxr-xr-x. 1 root root 3312 Jun 17 10:59 /bin/nroff
    [root@localhost ~]# ll -i /usr/bin/bash
    656465 -rwxr-xr-x. 1 root root 1071992 Aug 18 13:37 /usr/bin/bash
The nroff "executable" provided by groff is a shell script, e.g.,

    #! /bin/sh
    # Emulate nroff with groff.
    #
    # Copyright (C) 1992, 1993, 1994, 1999, 2000, 2001, 2002, 2003,
    #               2004, 2005, 2007, 2009
    # Free Software Foundation, Inc.
    #
    # Written by James Clark, maintained by Werner Lemberg.
    # This file is part of `groff'.

Depending on the system you are using, /bin/sh may be a symbolic link to /usr/bin/bash, e.g., on Fedora, which links /bin to /usr/bin.
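You can check this kind of thing yourself: an interpreter script starts with the two bytes #!, while a compiled binary starts with the ELF magic. A sketch using a generated script (the file is a throwaway):

```shell
# An interpreter script's first two bytes are "#!";
# an ELF binary's first four are 0x7f 'E' 'L' 'F'.
tmp=$(mktemp)
printf '#!/bin/sh\necho hi\n' > "$tmp"
magic=$(head -c 2 "$tmp")
echo "$magic"
```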
auditctl comm vs. exe
I am currently using Xamarin Studio, which has a bug in this version: it adds 2 parameters to an executable, which causes the output to flood with error messages, slowing down the build time from a minute to at least 10 minutes. Is there a way I can move the original executable and put a bash script or a link in its place, which removes the 2 offending parameters? Xamarin would run the command as usual, but the 2 offending parameters wouldn't be passed to the original command. Say the command is:

    /usr/bin/ibtool --errors --warnings --notices --output-format xml1 --minimum-deployment-target 7.0 --target-device iphone --target-device ipad --auto-activate-custom-fonts --sdk iPhoneSimulator9.0.sdk --compilation-directory Main.storyboard

I'd like to:

- Move ibtool to ibtool_orig
- Put a link or script in place of ibtool, which removes the offending parameters and passes the rest along to ibtool_orig, giving me the following command:

    /usr/bin/ibtool_orig --errors --output-format xml1 --minimum-deployment-target 7.0 --target-device iphone --target-device ipad --auto-activate-custom-fonts --sdk iPhoneSimulator9.0.sdk --compilation-directory Main.storyboard

(notice that ibtool is now ibtool_orig and --warnings --notices are gone)

Any ideas?
The canonical way is a loop shaped like:

    #! /bin/sh -
    for i do                    # loop over the positional parameters
      case $i in
        --notices|--warnings) ;;
        *) set -- "$@" "$i" ;;  # append to the end of the positional parameter
                                # list if neither --notices nor --warnings
      esac
      shift                     # remove from the head of the positional parameter list
    done
    exec "${0}_orig" "$@"

You can also replace #! /bin/sh - with the ksh, zsh, yash or bash path and replace exec with exec -a "$0", so that ibtool_orig is passed /path/to/ibtool as argv[0] (which it might use in its error messages or to re-execute itself).
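The filtering loop can be tested in isolation by replacing the final exec with an echo (the sample argument list below is made up, shortened from the question's ibtool invocation):

```shell
# Same loop as above, applied to a sample argument list;
# the final exec is replaced so we can observe the result.
set -- --errors --warnings --notices --output-format xml1
for i do
  case $i in
    --notices|--warnings) ;;   # drop these
    *) set -- "$@" "$i" ;;     # keep everything else (appended at the end)
  esac
  shift                        # consume the original argument at the front
done
result="$*"
echo "$result"
```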
link to an executable and remove some parameters
I have this desktop entry:

    [Desktop Entry]
    Name=dummy
    Type=Application
    Terminal=false
    Icon=/home/xyz/Software/Test/ico.png
    Exec=/home/xyz/Software/Test/start

which is supposed to execute a file containing:

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./Linux/lib
    exec ./foo --gc=sgen

I have tried creating a symlink, but the result is the same - it does nothing. When I double-click the file in its folder, it gives me a run prompt; after I click Run it runs fine, but nothing happens when executing from the desktop. I've tried exporting this path to PATH, but when running foo, it can't find some library even though it should... Also, the path is 100% correct, because the icon appears as it should. What I'm trying to do is create a working desktop shortcut to the start file, or to the foo file (which won't execute without an error for some reason; I have added its path to PATH - maybe the '--gc=sgen' argument is missing when executing?). Any help will be greatly appreciated!
The problem is that you're using relative paths in the script: ./Linux/lib, ./foo. These paths are relative to the current directory. The current directory of the process running the script is the current directory of whatever process launched it; it has nothing to do with the location of the script. When you run the script by clicking a desktop icon, the current directory is your home directory.

One solution is to add a cd command in the script, to change to the directory where the application is installed:

    #!/bin/sh
    cd /home/xyz/Software/Test/
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:./Linux/lib"
    exec ./foo --gc=sgen

But it would be more useful to not change the current directory, and instead use absolute paths. This way you can use the script to open files in the current directory, for example. While I'm at it, I added "$@" to the invocation of foo, which passes the arguments on the script's command line on to the application.

    #!/bin/sh
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/xyz/Software/Test/Linux/lib"
    exec /home/xyz/Software/Test/foo --gc=sgen "$@"

If the script is located in the application directory, you can make it detect its own location. $0 is the path to the script. ${0%/*} is the path to the script with everything after the last slash stripped off, i.e. the path to the directory containing the script.

    #!/bin/sh
    foo_directory="${0%/*}"
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$foo_directory/lib"
    exec "$foo_directory/foo" --gc=sgen "$@"

Beware that if LD_LIBRARY_PATH is initially empty, you're adding the current directory, which may not be a good idea. You should test it.
    #!/bin/sh
    foo_directory="${0%/*}"
    if [ -n "$LD_LIBRARY_PATH" ]; then
      export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$foo_directory/lib"
    else
      export LD_LIBRARY_PATH="$foo_directory/lib"
    fi
    exec "$foo_directory/foo" --gc=sgen "$@"

or (assuming you don't use empty entries in LD_LIBRARY_PATH, which is a sane choice)

    #!/bin/sh
    foo_directory="${0%/*}"
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$foo_directory/lib"
    LD_LIBRARY_PATH="${LD_LIBRARY_PATH#:}"
    exec "$foo_directory/foo" --gc=sgen "$@"
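The ${0%/*} idiom used in these scripts is plain POSIX parameter expansion; a quick sketch with an ordinary variable (the path is the one from the question):

```shell
# ${var%/*} strips the last slash and everything after it,
# turning a file path into its containing directory.
path=/home/xyz/Software/Test/start
dir=${path%/*}
echo "$dir"
```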
Neither *.desktop file nor symlink works (just for this one file) - Linux Mint 17.2 Cinnamon
I was reading the man page of chown. I don't understand why S_ISUID and S_ISGID mode should be cleared when the function returns successfully.
I think you're pointing to this from the man page:

    When the owner or group of an executable file are changed by an unprivileged user the S_ISUID and S_ISGID mode bits are cleared.

So why are they cleared? Notice that they are only cleared in the case of an executable file. When one of the bits (SUID/SGID) is set, anyone executing the file runs it with the privileges of the file's owner (or group). If the bits survived a change of ownership, an unprivileged user could end up with a file that executes as its new owner; that would be a huge security breach.
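You can at least observe the bits themselves from the shell (seeing them cleared by chown requires root, since changing ownership does; this sketch only sets and inspects them, and assumes GNU/busybox stat with -c):

```shell
# Set the setuid bit on a scratch file and read the mode back.
tmp=$(mktemp)
chmod 4755 "$tmp"            # leading 4 = S_ISUID
mode=$(stat -c %a "$tmp")    # prints the octal mode, e.g. 4755
echo "$mode"
rm -f "$tmp"
```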
Why are the S_ISUID and S_ISGID mode bits cleared when the owner or group of an executable file is changed by an unprivileged user?
I downloaded the Linux executable for Unetbootin 494, and now I'm trying to run it. As root, I made it executable and attempted to execute it:

    chmod +x unetbootin-linux-494
    ./unetbootin-linux-494

Nothing happens and no output is displayed. ps -e | grep unetbootin shows nothing either. The file's size looks right (4.3 MB), although I don't see a checksum on SourceForge with which to verify it. I'm running it on my /home partition (as root, though), so the filesystem isn't non-executable. How can I execute this file, or at least debug the problem further? I'm using Debian x64.
Short answer: installing ia32-libs and ia32-libs-gtk should fix the problem. The problem was pretty basic: running a 32-bit executable on a 64-bit system without the proper libraries doesn't work.

Longer answer: my initial post might have been too hasty, but since I had a minor amount of difficulty finding a solution, I might as well answer. I ran strace ./unetbootin-linux-494, which tells me:

    execve("./unetbootin-linux-494", ["./unetbootin-linux-494"], [/* 33 vars */]) = 0
    [ Process PID=5369 runs in 32 bit mode. ]
    old_mmap(0x1020000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0x1020000) = 0x1020000
    readlink("/proc/self/exe", "/home/jb/Downloads/unetbootin-linux-494", 4096) = 43
    old_mmap(0x8048000, 10891295, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x8048000
    mprotect(0x8048000, 10891292, PROT_READ|PROT_EXEC) = 0
    old_mmap(0x8aac000, 124071, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0xa63000) = 0x8aac000
    mprotect(0x8aac000, 124068, PROT_READ|PROT_WRITE) = 0
    old_mmap(0x8acb000, 4436, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x8acb000
    brk(0x8acd000) = 0x8bfc000
    open("/lib/ld-linux.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
    _exit(127) = ?

Clearly the problem is that the ld-linux.so.2 object doesn't exist on my system. Since that object is part of ia32-libs, I installed that package. However, that isn't enough, because I then received this error:

    unetbootin-linux-494: error while loading shared libraries: libgthread-2.0.so.0: cannot open shared object file: No such file or directory

According to this bug report, the ia32-libs-gtk package needs to be installed as well. Once I installed that, the executable ran normally.
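The diagnostic generalises: byte 4 of an ELF file (the EI_CLASS field) says whether a binary is 32- or 64-bit, so you can spot this mismatch without strace. A sketch inspecting /bin/sh (assumes a Linux system where /bin/sh is an ELF binary, and an od that supports -j/-N):

```shell
# EI_CLASS is the 5th byte of an ELF file: 01 = 32-bit, 02 = 64-bit.
class=$(od -An -tx1 -j4 -N1 /bin/sh | tr -d ' \n')
case $class in
  01) elfclass="32-bit" ;;
  02) elfclass="64-bit" ;;
  *)  elfclass="unknown" ;;
esac
echo "/bin/sh is a $elfclass ELF binary"
```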
What to do with an executable file that simply doesn't execute?
I wrote the following:

    #!/bin/bash
    cd ~/bin/red5-1.0.0
    gnome-terminal --working-directory=. -x red5.sh

red5.sh is the script to run (this is a media server written in Java). My script above opens a new terminal, but with an error message:

    There was an error creating the child process for this terminal
    Failed to execute child process "red5.sh" (No such file or directory)

What can be the cause? I am on Ubuntu 11.10.
The working directory does not affect your $PATH¹, thus I guess what's happening can be understood if you do the same thing in a terminal, i.e.

    $ cd ~/bin/red5-1.0.0
    $ red5.sh

will not work either; what does work is one of the following:

    $ cd ~/bin/red5-1.0.0
    $ ./red5.sh    # note the relative path to the script

or

    $ cd ~/bin/red5-1.0.0
    $ export PATH=~/bin/red5-1.0.0:$PATH  # add the path to $PATH, which is where
    $ red5.sh                             # the shell looks for red5.sh

So, guessing that gnome-terminal works similarly (regarding where it looks for executables), you could probably alter your script in one of these ways, too.

¹ If your $PATH does not contain ., as Kevin pointed out. (How about other relative paths, btw?)
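The PATH behaviour is easy to reproduce (the scratch directory and the red5demo.sh name are made up for the illustration):

```shell
# A script in a directory outside $PATH is only found via an explicit path.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho started\n' > "$tmp/red5demo.sh"
chmod +x "$tmp/red5demo.sh"

cd "$tmp"
if command -v red5demo.sh >/dev/null 2>&1; then found=yes; else found=no; fi
echo "bare name found: $found"   # "no" unless . happens to be in $PATH
out=$(./red5demo.sh)             # the relative path always works
echo "$out"
```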
How to create a script, which runs another script in separate terminal window and does not wait?
I have a directory in which I am storing all my shell scripts, and I would like new files there to be made executable by default so that I don't have to run chmod u+x [file] every time. Is there a way to make this happen? I tried chmod -R u+x [directory], but this only makes all the existing files executable, not ones that I add later. Is there a shell command, or perhaps a shell script you can suggest, that can make this happen? Thanks.
To make permissions apply to new files, you need an ACL (access control list). The main tool to do this is setfacl. You can set ACLs on directories so that new files created in them are always world-writable, or owned by a specific group. You are specifically interested in making new files executable. That would be done with:

    sudo setfacl -Rm d:u::rwx dir

That means, "recursively set default user permissions as rwx for new files". When I experiment I get this:

    $ mkdir dir
    $ getfacl dir
    user::rwx
    group::r-x
    other::r-x
    $ setfacl -Rm d:u::rwx dir
    $ getfacl dir
    user::rwx
    group::r-x
    other::r-x
    default:user::rwx
    default:group::r-x
    default:other::r-x

Cool, we've added some default: lines which now say that new files in this directory will have these specific permissions applied. But when I touch a new file we see:

    $ touch dir/file
    $ ls -l dir
    -rw-r--r-- 1 usr grp 0 Aug 19 10:57 file

It's not user-executable! The man page says:

    The perms field is a combination of characters that indicate the read (r), write (w), execute (x) permissions. Dash characters in the perms field (-) are ignored. The character X stands for the execute permission if the file is a directory or already has execute permission for some user. Alternatively, the perms field can define the permissions numerically, as a bit-wise combination of read (4), write (2), and execute (1). Zero perms fields or perms fields that only consist of dashes indicate no permissions.

The relevant part is the sentence about the character X. We can set the x ACL so that new files are executable, BUT that will only apply if the file already has execute permission for some user. This is a limitation. I assume it's a security limitation, so that malicious applications can't stick any file they like in a directory, have it automatically become executable, and then run it.
To demonstrate how ACLs could be used to do something similar, I'll show another example:

    $ setfacl -Rm d:g::rw dir
    $ touch dir/file1
    $ ls -l dir/file1
    -rw-rw-r-- 1 usr grp 0 Aug 19 11:00 dir/file1

You can see that I told the ACLs to add a default rule to make new files group-writable. When I made the new file, I confirmed that it was group-writable (while new files are usually only group-readable).
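The capital-X semantics quoted from the man page can be observed with chmod directly, which uses the same rule (scratch files, no ACL support needed):

```shell
# chmod's X only grants execute where some execute bit is already set
# (or the target is a directory).
tmp=$(mktemp -d)
touch "$tmp/plain" "$tmp/script"
chmod 644 "$tmp/plain"            # no execute bit anywhere
chmod 744 "$tmp/script"           # already executable for the owner
chmod a+X "$tmp/plain" "$tmp/script"
[ -x "$tmp/plain" ]  && echo "plain: executable"  || echo "plain: still not executable"
[ -x "$tmp/script" ] && echo "script: executable"
```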
Make every file in a directory executable by default?
I'm trying to add pkill to my sudoers file, but I think I need the full path for it to not give a syntax error. Does anybody know how to find it?
A generic way to find where a command comes from, if your shell supports it (bash does), is the type built-in. For example:

    $ type pkill
    pkill is /usr/bin/pkill

For names that aren't external commands, it may print different things, for example:

    $ type cd
    cd is a shell builtin
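In scripts, the POSIX equivalent is command -v, which prints the resolved path for an external command and just the name for a builtin:

```shell
# command -v resolves a name the way the shell would.
p=$(command -v ls)   # path of an external command, e.g. /bin/ls or /usr/bin/ls
echo "$p"
b=$(command -v cd)   # a builtin resolves to its own name
echo "$b"
```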
What is the full path for the pkill command? [duplicate]
A user has use of an application running on a Linux server. The application provides the user with an API that allows reading and writing files on the server, but does not offer any means of executing a file. Is that enough to ensure the user cannot execute commands on the server? The underlying filesystem is not mounted with noexec. The user can choose which file to read and write, and can create new files with arbitrary names. The user can delete files. The application does not have access to "system" files, running as a relatively standard unprivileged user account similar to what a desktop user would have.
Arbitrary names at arbitrary locations, limited only by filesystem permissions, is probably escalatable to executing arbitrary code. There are a lot of files in $HOME that are automatically run upon login, for example, and new ones are added over time (e.g., all the systemd user session stuff is fairly recent). Or maybe $HOME/bin is by default put at the front of $PATH. Another good target for an attacker would be ~/.ssh: the user wasn't supposed to have login access, but will they once they install an authorized keys file? You can of course disable this via config in /etc/ssh/, but that's just one program; there are probably others. I have no doubt you could secure this, but it'd be a lot of work (and you'd have to be very careful on OS upgrades!). If, however, you can limit it to arbitrary files restricted to certain directories (say, only in /srv/yourapp/ and subdirectories), that is safe (provided it's programmed correctly).
If a user can only read and write files, is that sufficient to prevent execution?
I'm stuck with the following (simple) problem: I want a script to be executed every 10 minutes. This script calls executable files. I use crontab and ksh on an AIX 5.3 system. The script makes use of relative paths, but changing the executable path to an absolute one didn't make any difference. So, after a few tries and this answer, I came up with the following crontab entry (*/10 doesn't work):

    rs14:/home/viloin# crontab -l
    0,10,20,30,40,50 * * * * cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh

Here is the script:

    #!/bin/ksh
    Main(){
        printf "executed in : %s\n" $(pwd);
        executableFile 2>/dev/null 1>&2;
        exeResult=$?; # expected return value : 90
        printf "%s\n" $exeResult;
    }
    Main;

Here is the output when I run the command manually:

    rs14:/home/viloin/cardme/bin# cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh
    executed in : /home/viloin/cardme/bin
    90

And finally the output when crontab runs it for me (from mail):

    Subject: Output from cron job cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh, [email protected], exit status 0

    Cron Environment:
    SHELL = /usr/bin/sh
    PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin
    CRONDIR=/var/spool/cron/crontabs
    ATDIR=/var/spool/cron/atjobs
    LOGNAME=viloin
    HOME=/home/viloin

    Your "cron" job executed on rs14.saprr.local on Wed Aug 24 11:50:00 CEST 2016
    cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh
    produced the following output:

    executed in : /home/viloin/cardme/bin
    127

    *************************************************
    Cron: The previous message is the standard output and standard error of one of your cron commands.

My file myScript.ksh has full permissions:

    rs14:/home/viloin/cardme/bin# ll -al myScript.ksh
    -rwxrwxrwx 1 viloin cardme 174 Aug 24 10:54 myScript.ksh

To make sure that my executableFile is not really exiting with code 127, I swapped in the echo binary under the same name and got the same behavior (except that it returns 0 instead of 90 when I run the command manually).

What is causing this difference between typing the command manually and asking crontab to do it for me?
Change your shell script to provide a full or relative path to the executable:

    ./executableFile ...

In interactive use, you must have either . or the cardme/bin directory in your PATH; that will not be true in cron's environment.
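Exit status 127 is the shell's "command not found" code, which you can confirm directly (the command name below is deliberately nonsense):

```shell
# Running a nonexistent command yields exit status 127,
# exactly what the cron mail in the question reported.
sh -c 'no_such_command_xyz' 2>/dev/null
status=$?
echo "$status"
```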
Execution of a program called by a shell called by crontab returns code 127
On my MIPS box I'm trying to run a program. I use a cross compiler for mips. When I run my program, I'm getting Illegal instruction I pulled of one binary from it, called cputest. It basically prints “hello world” with some delay. Here is what readelf tells about it: readelf -a ./cputest.mips ELF Header: Magic: 7f 45 4c 46 01 02 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, big endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: MIPS R3000 Version: 0x1 Entry point address: 0x4004e0 Start of program headers: 52 (bytes into file) Start of section headers: 1956 (bytes into file) Flags: 0x1007, noreorder, pic, cpic, o32, mips1 Size of this header: 52 (bytes) Size of program headers: 32 (bytes) Number of program headers: 8 Size of section headers: 40 (bytes) Number of section headers: 20 Section header string table index: 19 Section Headers: [Nr] Name Type Addr Off Size ES Flg Lk Inf Al [ 0] NULL 00000000 000000 000000 00 0 0 0 [ 1] .interp PROGBITS 00400134 000134 000014 00 A 0 0 1 [ 2] .reginfo MIPS_REGINFO 00400148 000148 000018 18 A 0 0 4 [ 3] .dynamic DYNAMIC 00400160 000160 0000c8 08 A 6 0 4 [ 4] .hash HASH 00400228 000228 000058 04 A 5 0 4 [ 5] .dynsym DYNSYM 00400280 000280 000110 10 A 6 1 4 [ 6] .dynstr STRTAB 00400390 000390 0000d0 00 A 0 0 1 [ 7] .init PROGBITS 00400460 000460 000028 00 AX 0 0 4 [ 8] .text PROGBITS 00400490 000490 0000b0 00 AX 0 0 16 [ 9] .MIPS.stubs PROGBITS 00400540 000540 000040 00 AX 0 0 4 [10] .fini PROGBITS 00400580 000580 000028 00 AX 0 0 4 [11] .rodata PROGBITS 004005a8 0005a8 000010 01 AMS 0 0 4 [12] .data PROGBITS 004105c0 0005c0 000010 00 WA 0 0 16 [13] .rld_map PROGBITS 004105d0 0005d0 000004 00 WA 0 0 4 [14] .got PROGBITS 004105e0 0005e0 000020 04 WAp 0 0 16 [15] .pdr PROGBITS 00000000 000600 0000c0 00 0 0 4 [16] .comment PROGBITS 00000000 0006c0 000033 01 MS 0 0 1 [17] .gnu.attributes LOOS+ffffff5 00000000 0006f3 000010 00 0 0 1 [18] .mdebug.abi32 PROGBITS 
00000010 000703 000000 00 0 0 1 [19] .shstrtab STRTAB 00000000 000703 0000a1 00 0 0 1 Key to Flags: W (write), A (alloc), X (execute), M (merge), S (strings) I (info), L (link order), G (group), T (TLS), E (exclude), x (unknown) O (extra OS processing required) o (OS specific), p (processor specific) There are no section groups in this file. Program Headers: Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align PHDR 0x000034 0x00400034 0x00400034 0x00100 0x00100 R E 0x4 INTERP 0x000134 0x00400134 0x00400134 0x00014 0x00014 R 0x1 [Requesting program interpreter: /lib/ld-uClibc.so.0] REGINFO 0x000148 0x00400148 0x00400148 0x00018 0x00018 R 0x4 LOAD 0x000000 0x00400000 0x00400000 0x005b8 0x005b8 R E 0x10000 LOAD 0x0005c0 0x004105c0 0x004105c0 0x00040 0x00040 RW 0x10000 DYNAMIC 0x000160 0x00400160 0x00400160 0x000c8 0x000c8 RWE 0x4 GNU_STACK 0x000000 0x00000000 0x00000000 0x00000 0x00000 RWE 0x4 NULL 0x000000 0x00000000 0x00000000 0x00000 0x00000 0x4 Section to Segment mapping: Segment Sections... 00 01 .interp 02 .reginfo 03 .interp .reginfo .dynamic .hash .dynsym .dynstr .init .text .MIPS.stubs .fini .rodata 04 .data .rld_map .got 05 .dynamic 06 07 Dynamic section at offset 0x160 contains 20 entries: Tag Type Name/Value 0x00000001 (NEEDED) Shared library: [libc.so.0] 0x0000000f (RPATH) Library rpath: [/home/xia/Builds/H208N_V1.0_Dev/csp/release/tools/uclibc/lib] 0x0000000c (INIT) 0x400460 0x0000000d (FINI) 0x400580 0x00000004 (HASH) 0x400228 0x00000005 (STRTAB) 0x400390 0x00000006 (SYMTAB) 0x400280 0x0000000a (STRSZ) 208 (bytes) 0x0000000b (SYMENT) 16 (bytes) 0x70000016 (MIPS_RLD_MAP) 0x4105d0 0x00000015 (DEBUG) 0x0 0x00000003 (PLTGOT) 0x4105e0 0x70000001 (MIPS_RLD_VERSION) 1 0x70000005 (MIPS_FLAGS) NOTPOT 0x70000006 (MIPS_BASE_ADDRESS) 0x400000 0x7000000a (MIPS_LOCAL_GOTNO) 2 0x70000011 (MIPS_SYMTABNO) 17 0x70000012 (MIPS_UNREFEXTNO) 19 0x70000013 (MIPS_GOTSYM) 0xb 0x00000000 (NULL) 0x0 There are no relocations in this file. 
The decoding of unwind sections for machine type MIPS R3000 is not currently supported. Symbol table '.dynsym' contains 17 entries: Num: Value Size Type Bind Vis Ndx Name 0: 00000000 0 NOTYPE LOCAL DEFAULT UND 1: 004105c0 0 NOTYPE GLOBAL DEFAULT 12 _fdata 2: 00000001 0 SECTION GLOBAL DEFAULT ABS _DYNAMIC_LINKING 3: 004185d0 0 NOTYPE GLOBAL DEFAULT ABS _gp 4: 00400490 0 NOTYPE GLOBAL DEFAULT 8 _ftext 5: 004105d0 0 OBJECT GLOBAL DEFAULT 13 __RLD_MAP 6: 00410600 0 NOTYPE GLOBAL DEFAULT ABS __bss_start 7: 00410600 0 NOTYPE GLOBAL DEFAULT ABS _edata 8: 004105e0 0 OBJECT GLOBAL DEFAULT ABS _GLOBAL_OFFSET_TABLE_ 9: 00410600 0 NOTYPE GLOBAL DEFAULT ABS _end 10: 00410600 0 NOTYPE GLOBAL DEFAULT ABS _fbss 11: 00400580 28 FUNC GLOBAL DEFAULT 10 _fini 12: 00400490 72 FUNC GLOBAL DEFAULT 8 main 13: 00400560 0 FUNC GLOBAL DEFAULT UND __uClibc_main 14: 00400460 28 FUNC GLOBAL DEFAULT 7 _init 15: 00400550 0 FUNC GLOBAL DEFAULT UND sleep 16: 00400540 0 FUNC GLOBAL DEFAULT UND printf Histogram for bucket list length (total of 3 buckets): Length Number % of total Coverage 0 0 ( 0.0%) 1 0 ( 0.0%) 0.0% 2 0 ( 0.0%) 0.0% 3 0 ( 0.0%) 0.0% 4 0 ( 0.0%) 0.0% 5 2 ( 66.7%) 62.5% 6 1 ( 33.3%) 100.0% No version information found in this file. Attribute Section: gnu File Attributes Tag_GNU_MIPS_ABI_FP: Soft float Primary GOT: Canonical gp value: 004185d0 Reserved entries: Address Access Initial Purpose 004105e0 -32752(gp) 00000000 Lazy resolver 004105e4 -32748(gp) 80000000 Module pointer (GNU extension) Global entries: Address Access Initial Sym.Val. 
Type Ndx Name 004105e8 -32744(gp) 00400580 00400580 FUNC 10 _fini 004105ec -32740(gp) 00400490 00400490 FUNC 8 main 004105f0 -32736(gp) 00400560 00400560 FUNC UND __uClibc_main 004105f4 -32732(gp) 00400460 00400460 FUNC 7 _init 004105f8 -32728(gp) 00400550 00400550 FUNC UND sleep 004105fc -32724(gp) 00400540 00400540 FUNC UND printf When I cross compile my program (which just prints “hello world”) without the -static flag and try to run it, here is what happens: # ls hello.mips # ./hello.mips /bin/sh: ./hello.mips: Permission denied # chmod +x hello.mips # ./hello.mips /bin/sh: ./hello.mips: not found # ls -la drwxrwxrwx 2 zhangxia root 0 Aug 8 00:01 . drwxr-xr-x 3 zhangxia root 0 Aug 7 22:46 .. -rwsrwsrwx 1 888 root 5743 Aug 8 00:01 hello.mips Why can't I find it when it's there? So I compile it with the -static flag and here is the readelf output (because of size limits here I will put just a part) readelf -a hello.static ELF Header: Magic: 7f 45 4c 46 01 02 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, big endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: MIPS R3000 Version: 0x1 Entry point address: 0x400280 Start of program headers: 52 (bytes into file) Start of section headers: 647608 (bytes into file) Flags: 0x1007, noreorder, pic, cpic, o32, mips1 Size of this header: 52 (bytes) Size of program headers: 32 (bytes) Number of program headers: 6 Size of section headers: 40 (bytes) Number of section headers: 33 Section header string table index: 30 Section Headers: [Nr] Name Type Addr Off Size ES Flg Lk Inf Al [ 0] NULL 00000000 000000 000000 00 0 0 0 [ 1] .note.ABI-tag NOTE 004000f4 0000f4 000020 00 A 0 0 4 [ 2] .reginfo MIPS_REGINFO 00400114 000114 000018 18 A 0 0 4 [ 3] .note.gnu.build-i NOTE 0040012c 00012c 000024 00 A 0 0 4 [ 4] .rel.dyn REL 00400150 000150 000098 08 A 0 0 4 [ 5] .init PROGBITS 004001e8 0001e8 000098 00 AX 0 0 4 [ 6] .text PROGBITS 00400280 000280 07b5a0 00 AX 0 0 16 
[ 7] __libc_freeres_fn PROGBITS 0047b820 07b820 0013a8 00 AX 0 0 4 [ 8] .fini PROGBITS 0047cbc8 07cbc8 000054 00 AX 0 0 4 [ 9] .rodata PROGBITS 0047cc20 07cc20 015a00 00 A 0 0 16 [10] .eh_frame PROGBITS 004a2620 092620 0019a4 00 WA 0 0 4 [11] .gcc_except_table PROGBITS 004a3fc4 093fc4 00014e 00 WA 0 0 1 [12] .tdata PROGBITS 004a4114 094114 000010 00 WAT 0 0 4 [13] .tbss NOBITS 004a4124 094124 000018 00 WAT 0 0 4 [14] .ctors PROGBITS 004a4124 094124 000008 00 WA 0 0 4 [15] .dtors PROGBITS 004a412c 09412c 00000c 00 WA 0 0 4 [16] .jcr PROGBITS 004a4138 094138 000004 00 WA 0 0 4 [17] .data.rel.ro PROGBITS 004a413c 09413c 00259c 00 WA 0 0 4 [18] .data PROGBITS 004a66e0 0966e0 0007c0 00 WA 0 0 16 [19] __libc_subfreeres PROGBITS 004a6ea0 096ea0 000030 00 WA 0 0 4 [20] __libc_atexit PROGBITS 004a6ed0 096ed0 000004 00 WA 0 0 4 [21] .got PROGBITS 004a6ee0 096ee0 000a48 04 WAp 0 0 16 [22] .sdata PROGBITS 004a7928 097928 000004 00 WAp 0 0 4 [23] .sbss NOBITS 004a7930 09792c 0000fc 00 WAp 0 0 8 [24] .bss NOBITS 004a7a30 09792c 001c10 00 WA 0 0 16 [25] __libc_freeres_pt NOBITS 004a9640 09792c 000018 00 WA 0 0 4 [26] .pdr PROGBITS 00000000 09792c 006700 00 0 0 4 [27] .comment PROGBITS 00000000 09e02c 000039 01 MS 0 0 1 [28] .gnu.attributes LOOS+ffffff5 00000000 09e065 000010 00 0 0 1 [29] .mdebug.abi32 PROGBITS 00001320 09e075 000000 00 0 0 1 [30] .shstrtab STRTAB 00000000 09e075 000140 00 0 0 1 [31] .symtab SYMTAB 00000000 09e6e0 006d70 10 32 655 4 [32] .strtab STRTAB 00000000 0a5450 0065f4 00 0 0 1 Key to Flags: W (write), A (alloc), X (execute), M (merge), S (strings) I (info), L (link order), G (group), T (TLS), E (exclude), x (unknown) O (extra OS processing required) o (OS specific), p (processor specific) There are no section groups in this file. 
Program Headers: Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align REGINFO 0x000114 0x00400114 0x00400114 0x00018 0x00018 R 0x4 LOAD 0x000000 0x00400000 0x00400000 0x92620 0x92620 R E 0x10000 LOAD 0x092620 0x004a2620 0x004a2620 0x0530c 0x07038 RW 0x10000 NOTE 0x0000f4 0x004000f4 0x004000f4 0x00020 0x00020 R 0x4 NOTE 0x00012c 0x0040012c 0x0040012c 0x00024 0x00024 R 0x4 TLS 0x094114 0x004a4114 0x004a4114 0x00010 0x00028 R 0x4 Section to Segment mapping: Segment Sections... 00 .reginfo 01 .note.ABI-tag .reginfo .note.gnu.build-id .rel.dyn .init .text __libc_freeres_fn .fini .rodata 02 .eh_frame .gcc_except_table .tdata .ctors .dtors .jcr .data.rel.ro .data __libc_subfreeres __libc_atexit .got .sdata .sbss .bss __libc_freeres_ptrs 03 .note.ABI-tag 04 .note.gnu.build-id 05 .tdata .tbss There is no dynamic section in this file. Relocation section '.rel.dyn' at offset 0x150 contains 19 entries: Offset Info Type Sym.Value Sym. Name 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE 00000000 00000000 R_MIPS_NONE The decoding of unwind sections for machine type MIPS R3000 is not currently supported. 
Symbol table '.symtab' contains 1751 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
     0: 00000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 004000f4     0 SECTION LOCAL  DEFAULT    1
     2: 00400114     0 SECTION LOCAL  DEFAULT    2
  1747: 004a5d58    36 OBJECT  GLOBAL DEFAULT   17 _nl_C_LC_IDENTIFICATION
  1748: 004a9580    76 OBJECT  GLOBAL DEFAULT   24 _dl_ns
  1749: 00450f20  3016 FUNC    GLOBAL DEFAULT    6 _nl_load_locale_from_arch
  1750: 004380e0   248 FUNC    WEAK   DEFAULT    6 wctrans

No version information found in this file.

Displaying notes found at file offset 0x000000f4 with length 0x00000020:
  Owner        Data size    Description
  GNU          0x00000010   NT_GNU_ABI_TAG (ABI version tag)
    OS: Linux, ABI: 2.6.18

Displaying notes found at file offset 0x0000012c with length 0x00000024:
  Owner        Data size    Description
  GNU          0x00000014   NT_GNU_BUILD_ID (unique build ID bitstring)
    Build ID: a56a4b258e108ec9affb61c4a8ba46527052bca9

Attribute Section: gnu
File Attributes
  Tag_GNU_MIPS_ABI_FP: Hard float (double precision)

Both binaries (static and dynamic) run just fine in QEMU and on my second MIPS box. Any thoughts? Could "Illegal instruction" be due to different ld-uClibc / libc names or versions?

@Stephen-Kitt Here it is:

# ldd ./hello.mips
/bin/sh: ldd: not found
cat proc/version
Linux version 2.6.30.9 (xia@njzd) (gcc version 4.4.6 (Realtek RSDK-1.5.6p2) ) #2 Wed Apr 29 18:57:54 CST 2015
# cat proc/cpuinfo
system type: RTL8672
processor: 0
cpu model : 56322
BogoMIPS: 619.31
tlb_entries : 64
mips16 implemented : yes

I'm running Ubuntu 14.04. Basically:

mips-gcc -o hello.mips hello.c
mips-gcc -static -o hello.static hello.c

mips-gcc is from the Debian repository:

$ mips-linux-gnu-gcc -v
Using built-in specs.
Target: mips-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 4.4.5-8' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.4 --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/mips-linux-gnu/include/c++/4.4.5 --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --disable-libssp --enable-targets=all --enable-checking=release --program-prefix=mips-linux-gnu- --includedir=/usr/mips-linux-gnu/include --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=mips-linux-gnu --with-headers=/usr/mips-linux-gnu/include --with-libs=/usr/mips-linux-gnu/lib
Thread model: posix
gcc version 4.4.5 (Debian 4.4.5-8)

I obtained it from here: http://www.emdebian.org/debian/
# cat proc/cpuinfo
system type: RTL8672
processor: 0
cpu model: 56322

An RTL8672 is not a full MIPS implementation, but a Lexra. Lexra cores leave out the (then-patented) unaligned load/store instructions lwl, lwr, swl and swr, so code emitted by a standard MIPS toolchain faults with "Illegal instruction" as soon as it executes one of them. You will need a customized toolchain that knows how to handle this. Something like this, or for a binary-only toolchain, look for rsdk; e.g., this.
“Illegal instruction” on a static MIPS binary
1,356,038,512,000
I'm having an issue where when I transfer a Python file to my VPS via FTP and try to run it using ./foo.py I am returned with the error: : No such file or directory. The error seems to indicate that the file I am trying to execute does not exist. But I can run the program with no problems using python foo.py which leads me to believe that the error actually probably means something else. At first I thought it could be an issue with the shebang line, so I copied all of the content of the file and pasted it into a new file on the VPS that had not been transferred via FTP. The two files had exactly the same content but when I ran the new file using ./bar.py it ran as expected. So I've come to the conclusion that this could be an issue with the way that it is transferred. I have switched between ASCII and binary but both of these transfer methods give the same error. Is it possible to stop this from happening?
This happens when a file contains \r\n as a line terminator instead of \n, since \r is a C0 control code meaning "go to the beginning of the current line". To fix, run dos2unix foo.py. Example session: ben@joyplim /tmp/cr % echo '#!/usr/bin/env python' > foo.py ben@joyplim /tmp/cr % chmod +x foo.py ben@joyplim /tmp/cr % ./foo.py ben@joyplim /tmp/cr % unix2dos foo.py unix2dos: converting file foo.py to DOS format ... ben@joyplim /tmp/cr % ./foo.py : No such file or directory ben@joyplim /tmp/cr % ./foo.py 2>&1 | xxd 0000000: 2f75 7372 2f62 696e 2f65 6e76 3a20 7079 /usr/bin/env: py 0000010: 7468 6f6e 0d3a 204e 6f20 7375 6368 2066 thon.: No such f 0000020: 696c 6520 6f72 2064 6972 6563 746f 7279 ile or directory 0000030: 0a . Specifically note the 0d3a in the output.
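If dos2unix is not installed, the stray carriage returns can be detected and stripped with standard tools; a sketch (the file name is just the one from the question):

```shell
# Reproduce the problem: a script whose lines end in \r\n
printf '#!/usr/bin/env python\r\nprint("hi")\r\n' > foo.py

# Detect the carriage returns (file(1), if installed, will also
# report "CRLF line terminators")
od -c foo.py | grep '\\r'

# Strip them without dos2unix
tr -d '\r' < foo.py > foo.fixed && mv foo.fixed foo.py
od -c foo.py | grep '\\r' || echo "clean"
```

Since the \r ends up appended to the interpreter path on the shebang line, the kernel looks for an interpreter literally named "python\r", which is why the error blames a missing file rather than the line endings.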
Why does trying to run a python executable return ': No such file or directory' after transferring it to server via FTP? [duplicate]
1,356,038,512,000
I downloaded and extracted the VSCode zip. I see the Code binary file, but double-clicking it in my file manager does nothing. I also tried ./Code in console, but I only get bash: ./Code: cannot execute binary file. Just typing Code causes bash: Code: command not found. My guess is that it might be a dependency issue, but I don't even know where to start. I tried to chmod 777 the files and folders, but no luck.

uname -a:

Linux crunchbang 3.2.0-4-686-pae #1 SMP Debian 3.2.41-2 i686 GNU/Linux

Running strace produces:

$ strace ./Code
execve("./Code", ["./Code"], [/* 25 vars */]) = -1 ENOEXEC (Exec format error)
dup(2)                                  = 3
fcntl64(3, F_GETFL)                     = 0x2 (flags O_RDWR)
fstat64(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ec000
_llseek(3, 0, 0xbf9d8a88, SEEK_CUR)     = -1 ESPIPE (Illegal seek)
write(3, "strace: exec: Exec format error\n", 32strace: exec: Exec format error
) = 32
close(3)                                = 0
munmap(0xb76ec000, 4096)                = 0
exit_group(1)                           = ?

Running file produces:

$ file Code
Code: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0x7a776e173e68b15269ebd273dd987b526f5ebcae, stripped
I found the solution based on another Q&A. According to file, Code is a 64-bit executable, while uname shows my system is 32-bit, which is different from what I thought. A 32-bit kernel cannot execute 64-bit binaries, hence bash's "cannot execute binary file" error; the fix is to use a 32-bit build of VS Code, if one is available for your version, or move to a 64-bit system.
Unable to start vscode executable
1,356,038,512,000
I have installed Ant in an RHEL 5 environment and set the ANT_HOME variable in /etc/profile, pointing to /usr/local/ant/bin. When I execute echo $ANT_HOME it prints the correct path, but I get ant: command not found when I try to run it from any other directory. I installed it by untarring apache-ant-1.7.1-bin.tar.gz. I tried the command below:

ln -s /usr/local/ant/apache-ant-1.7.1/bin /usr/local/ant/apache-ant-1.7.1/bin/ant
ln: creating symbolic link `/usr/local/ant/apache-ant-1.7.1/bin/ant' to `/usr/local/ant/apache-ant-1.7.1/bin': File exists

but when I run ant it still says command not found.
This should make it work: export PATH=$PATH:/usr/local/ant/apache-ant-1.7.1/bin Make sure you update this PATH in your .bash_profile file or in any of the startup scripts under /etc/profile.d
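After adding the directory, it can help to make the shell forget any cached command lookups and then verify the result; a sketch using the path from the question:

```shell
# Add Ant's bin directory to the search path
export PATH="$PATH:/usr/local/ant/apache-ant-1.7.1/bin"

# Clear the shell's cached command locations
hash -r

# Should now print the full path to the ant script
command -v ant || echo "ant still not found; check the directory name"
```

Note that setting ANT_HOME alone is not enough: ANT_HOME tells Ant where its own files live, but only PATH controls where the shell looks for commands.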
ant: command not found when running from a path other than the Ant installation directory
1,356,038,512,000
I'm a bit confused. I just read this: http://www.es.freebsd.org/doc/handbook/binary-formats.html, which basically says that FreeBSD uses the ELF binary format. But when I compile my code using cc, I get a file called a.out. So what's going on here? Can I somehow specify in which format cc should build my code? Does FreeBSD just support both formats? Is the resulting executable actually in ELF format, but is it just called a.out for some reason :P?
The a.out file is still leftover from when compilers were using the a.out format. If you check the file with file a.out you will see it is actually in ELF format. To specify the name of the output file, use cc -o exec_name code.c.
How to compile to a specific executable format?
1,358,240,936,000
In a linux machine, we may have to compile our programs with respect to that linux machine. Now, if we already have some other users (not root. A typical user.) who have already compiled many programs for this machine, is it possible to do something like this? For instance, user oldGuy got mpirun, python, and several other programs in his home directory, and he can invoke "mpirun" or any other binaries in his directory without having to type "./mpirun". Bash knows which binary he is referring to. He also has various other settings done. Now, suppose we have a new user called newGuy. If in our server, oldGuy already has compiled all the binaries that newGuy wanted, instead of having the newGuy wasting his time compiling programs that oldGuy already has and set everything correctly, can the newGuy "inherit" some binaries, settings, etc. from oldGuy? For example, oldGuy can simply invoke "mpirun" right from Bash, can newGuy do anything in order to be on the same page (all identical settings) with oldGuy right away, without having to compile the programs and set other settings, etc?
When you execute a program by typing its name (with no directory part, e.g. just mpirun with possible arguments), the system looks for a file by that name in a list of directories called the program search path, or path for short. This path is determined by the environment variable PATH, which contains a colon-separated list of directories, for example /usr/local/bin:/usr/bin:/bin to look first in /usr/local/bin, then /usr/bin, then /bin. You can add directories to your search path. For example, if joe has installed some programs in his home directory /home/joe with the executables in /home/joe/bin, the following line adds /home/joe/bin at the end of the existing search path: PATH=$PATH:/home/joe/bin In most environments, for this setting to take effect, add the line to the file called .profile in your home directory. If that file doesn't exist, create it. If you log in in a graphical environment, depending on your environment and distribution, .profile may not be read. In this case, look in your environment's documentation or ask here, stating exactly what operating system, distribution and desktop environment you're running. If you log in in text mode (e.g. over SSH) and .profile isn't read but there is a file called .bash_profile, add the line to .bash_profile.
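A concrete sketch for newGuy, assuming oldGuy's executables live in /home/oldGuy/bin (a hypothetical path; adjust it) and that the directory is readable by other users:

```shell
# Make the change permanent for future logins
echo 'PATH=$PATH:/home/oldGuy/bin' >> ~/.profile

# Apply it to the current shell session too
PATH=$PATH:/home/oldGuy/bin

# Anything in that directory now resolves by name
command -v mpirun || echo "mpirun is not in /home/oldGuy/bin"
```

This only shares the executables; shell settings such as aliases would have to be copied from oldGuy's dotfiles separately, and programs configured with a prefix inside oldGuy's home may still hard-code paths under it.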
Applying and linking all settings/binaries of one user to another user in Linux
1,358,240,936,000
I have downloaded VS Code on elementary OS, and I followed the setup instructions in "Setting up Visual Studio Code on Linux". The executable file "code" does not seem to be recognized; nothing opens when I run it. Any ideas?
Try to open it by running in a terminal (Ctrl+Alt+T): /opt/VSCode-linux-x64/Code If it work you can make a shortcut (so you will only need to run code to open Code): sudo ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code
VS Code not working on Elementary OS
1,358,240,936,000
I hope my title isn't confusing. I've got a CentOS 5 machine that had Ruby 1.8.7 installed. In order to upgrade my Ruby installation and gems/rails I:

Uninstalled Ruby: sudo yum remove ruby

Downloaded the latest stable release of Ruby and untarred it: wget ... && tar -zxf ...

Went through the usual installation:

./configure --prefix=$HOME
make
sudo make install

Downloaded rubygems (wget ...) and ran the setup file: ruby setup.rb

Now my issue is that if I try to install Rails, which I do by typing gem install rails, I get the following message:

-bash: /usr/local/bin/gem: /usr/local/bin/ruby: bad interpreter: Permission denied

So the next logical move (for me) was to type sudo gem install rails, but that returns "sudo: gem: command not found", which means I've screwed up something royally. Just to add some more information:

whereis ruby: ruby: /usr/lib/ruby /usr/lib64/ruby /usr/local/bin/ruby /usr/local/lib/ruby
which ruby: ~/bin/ruby

I'm thinking that by installing Ruby manually from source I've screwed something up; perhaps the --prefix=$HOME is the culprit here?
It seems you didn't uninstall the package that provides the gem executable, so it is still in /usr/local/bin/, and points to the no longer present /usr/local/bin/ruby interpreter. You can either uninstall that package (recommended, since you've also removed the ruby package it depends upon), or just make sure ~/bin is before /usr/local/bin on your PATH. (Alternatively, if you have root access, you could just rerun the ./configure script without specifying --prefix=${HOME}, and let it install in /usr/local/bin, which is Ruby's default.) Once you've arranged things so that your shell finds the gem executable installed in ~/bin, you should be able to simply gem install rails without needing sudo. (Or, if you go for the root install into /usr/local/bin, make sure gem is at /usr/local/bin/gem, and then run sudo gem install rails, as you tried before). Possibly a better approach would have been to look at either rvm or rbenv, both of which make managing multiple rubies a fairly painless task. Using either of these tools, you can have several versions of ruby installed without the need to remove the system-wide one, which might be needed by other packages on the system.
Path issues with a source install
1,358,240,936,000
I'm using x64 Ubuntu. A few months ago I accidentally messed up the groups/owners of all files on /, but managed to fix it using a VirtualBox install of Ubuntu. Now I'm running into a problem that I think is related to that mistake. When I try to reinstall ia32-libs (Skype is having problems, so I need to reinstall those libs) I get an error message:

/var/lib/dpkg/info/ia32-libs.postinst: 40: /usr/lib32/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders: Permission denied

The output of ls -al /usr/lib32/gdk-pixbuf-2.0/ is this:

total 476
drwxr-xr-x  3 root root   4096 2011-09-24 17:08 .
drwxr-xr-x 53 root root 143360 2011-09-24 17:08 ..
drwxr-xr-x  3 root root     40 2011-09-24 04:44 2.10.0
-rwxr-xr-x  1 root root   9648 2011-04-05 00:40 gdk-pixbuf-query-loaders

I have tried to reinstall gdk-pixbuf-2.0, but it didn't work. How can I fix this?
Run ldd /usr/lib32/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders and make sure every file is accounted for (the line must end with an address like (0xf7789000)). In particular, check the permissions on the dynamic loader /lib/ld-linux.so.2. This is the only file in the lot that could cause that particular error message, but you may need to fix other permissions while you're at it.

chown root:root /lib*/*
chmod a+rx /lib*/ld-* /lib*/*/
chmod -R a+r /lib
Problem when installing “ia32-libs”
1,358,240,936,000
I had ldc2 and gdc compiled from source and working up until a month ago. Nothing has changed, except I can't remember the variable(s) I would set in the terminal to get ldc2 and gdc to work. I get the following errors when trying to compile D source code; with gdc ($ /home/Code/D/gdc/Bin/usr/local/bin/gdc -o t4 t4.d): /home/Code/D/gdc/Bin/usr/local/bin/../libexec/gcc/x86_64-unknown-linux-gnu/4.4.5/cc1d: error while loading shared libraries: libmpfr.so.1: cannot open shared object file: No such file or directory With ldc2 (/home/Code/D/ldc2/bin/ldc2 -o t4 t4.d): /home/Code/D/ldc2/bin/ldc2: error while loading shared libraries: libconfig++.so.8: cannot open shared object file: No such file or directory I can't remember if it was just an addition to PATH or something to DFLAGS. Any ideas?
Here you can't even run the compiler executable, because it can't find the libraries it needs. gdc is looking for libmpfr.so.1 and ldc2 is looking for libconfig++.so.8. If these libraries are still present on your system, perhaps in /home/Code/D/gdc/Bin/usr/local/lib, you can add that directory to the LD_LIBRARY_PATH environment variable (on most unices; on Mac OS X, the variable is called DYLD_LIBRARY_PATH). LD_LIBRARY_PATH=/home/Code/D/gdc/Bin/usr/local/lib gdc … You may want to write wrapper scripts to run gdc and ldc2, or put this in your ~/.profile: export LD_LIBRARY_PATH=/home/Code/D/gdc/Bin/usr/local/lib If these libraries were in /usr/lib and disappeared in a system upgrade, you'll have to either restore the required versions, or recompile the D tools for the new versions of the libraries.
Cannot open shared object file when using D compiler
1,358,240,936,000
I am trying to install the VMware client on my work computer, which is running CentOS 7 and on which I have superuser privileges. When I run the command sudo ./VMware-Horizon-Client-5.2.0-14604769.x64.bundle I get the following error message sudo: unable to execute ./VMware-Horizon-Client-5.2.0-14604769.x64.bundle: Permission denied When I run the same command without sudo the file executes, but the installer brings up a dialog box with the following error message root access is required for the operations you have chosen. I've checked the file's permissions, and I have execute privileges. I've even tried temporarily setting the privileges to 777, but it made no difference. Moving the file to another directory doesn't seem to help. I've run df and then mount to make sure noexec isn't set for this device, and it is not. I've successfully installed programs on this computer before, so this behaviour seems particularly odd. Does anyone have any suggestions on how I might get this to work or other ways I might try installing the VMware client?
It sounds like you have NFS homes, and that the file is on a Kerberized NFS share, which means that even root can't read things in it. To work around it, as yourself (not root), copy the file to somewhere that isn't NFS (like /tmp), and then run it from there:

cp VMware-Horizon-Client-5.2.0-14604769.x64.bundle /tmp/
cd /tmp
sudo sh ./VMware-Horizon-Client-5.2.0-14604769.x64.bundle

Running it through sh also sidesteps the case where /tmp is mounted noexec; alternatively, once it's in /tmp, copy it somewhere else as root and run it from there.
Unable to execute file with superuser privileges
1,358,240,936,000
I have made a small Snake game using C. When I open it via GUI (Nemo), nothing gets opened. But if I open it using the terminal, it works as expected. I've tried it on multiple computers with different OSes (Ubuntu, Mint etc) but all exhibit the same problem. I also tried right clicking the file, selecting 'Open with other application' and typing the following in the 'Enter custom command to execute' textbox: gnome-terminal --working-directory ~/Downloads/My\ Programs/Snake -e "/bin/bash -c './Snake && read'" and then tried opening the executable. Still, nothing happens. But executing the same command using the terminal works perfectly. Permissions set for the Snake executable file are -rwxr-xr-x and its type is Program (application/x-executable). Also, file gives: $ file Snake Snake: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=c609e53bda05544c647aab2a19aa865af6dc93c2, not stripped What could be the problem?
This is the same result you get when browsing Nemo to /usr/bin and clicking on bash, right? The program runs, but with no terminal attached it has nothing to read from or write to, so it looks like nothing happens.

You can put the shell command you want to run, invoking gnome-terminal, in a shell script. (Nothing specific to it being a shell script; the program itself could re-execute under a terminal with something like isatty(STDERR_FILENO) || execlp("gnome-terminal", "--working-directory", ...);.) There is no more standard bridge between the GUI and terminal apps. If you want to package this as an app for the GUI, you have to construct your own wrapper; there is no generic way to ask the desktop to "run this in a terminal emulator".

Running executable files from the file browser is not universally considered a great idea (GNOME Files doesn't do it anymore). Your file browser may or may not also support running .desktop files directly (i.e. without installing them to your desktop application menu/launcher). Neither of these is great for security, e.g. an executable file on a removable device can set a custom icon to look like a Word document.
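A .desktop launcher is the usual way to tell the desktop environment to run a program in a terminal; a sketch assuming the paths from the question (replace "user" with your user name):

```
[Desktop Entry]
Type=Application
Name=Snake
Terminal=true
Path=/home/user/Downloads/My Programs/Snake
Exec="/home/user/Downloads/My Programs/Snake/Snake"
```

Save it as ~/.local/share/applications/snake.desktop (or on the desktop) and mark it executable. Terminal=true makes the desktop run the program in its default terminal emulator, which is what the hand-written gnome-terminal command was emulating; Path sets the working directory, and the Exec value is quoted because the path contains a space.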
Opening an executable via GUI does nothing although opening it via terminal works
1,358,240,936,000
I am trying to start supervisor service supervisor start, as root, but it gives me env: /etc/init.d/supervisor: No such file or directory Yet, I can clearly see that file exists: [root@master vagrant]# ls -l /etc/init.d/ total 256 -rwxr-xr-x. 1 root root 2062 Oct 17 2014 atd -rwxr-xr-x. 1 root root 3378 Jun 22 2012 auditd -rwxr-xr-x. 1 root root 2826 Nov 23 2013 crond -rw-r--r--. 1 root root 18586 Oct 10 2013 functions -rwxr-xr-x. 1 root root 5866 Oct 10 2013 halt -rwxr-xr-x. 1 root root 10804 Nov 23 2013 ip6tables -rwxr-xr-x. 1 root root 10688 Nov 23 2013 iptables -rwxr-xr-x. 1 root root 652 Oct 10 2013 killall -r-xr-xr-x. 1 root root 2134 Nov 23 2013 lvm2-lvmetad -r-xr-xr-x. 1 root root 2665 Nov 23 2013 lvm2-monitor -rwxr-xr-x. 1 root root 2989 Oct 10 2013 netconsole -rwxr-xr-x. 1 root root 5428 Oct 10 2013 netfs -rwxr-xr-x. 1 root root 6334 Oct 10 2013 network -rwxr-xr-x. 1 root root 6364 Nov 22 2013 nfs -rwxr-xr-x. 1 root root 3526 Nov 22 2013 nfslock -rwxr-xr-x. 1 root root 3852 Dec 3 2011 postfix -rwxr-xr-x. 1 root root 5383 Mar 30 08:29 postgresql -rwxr-xr-x. 1 root root 1513 Sep 17 2013 rdisc -rwxr-xr-x. 1 root root 1822 Nov 22 2013 restorecond -rwxr-xr-x. 1 root root 2073 Feb 22 2013 rpcbind -rwxr-xr-x. 1 root root 2518 Nov 22 2013 rpcgssd -rwxr-xr-x. 1 root root 2305 Nov 22 2013 rpcidmapd -rwxr-xr-x. 1 root root 2464 Nov 22 2013 rpcsvcgssd -rwxr-xr-x. 1 root root 2011 Aug 15 2013 rsyslog -rwxr-xr-x. 1 root root 3085 May 11 21:07 salt-master -rwxr-xr-x. 1 root root 3332 May 11 21:07 salt-minion -rwxr-xr-x. 1 root root 1698 Nov 22 2013 sandbox -rwxr-xr-x. 1 root root 2056 Feb 27 15:57 saslauthd -rwxr-xr-x. 1 root root 647 Oct 10 2013 single -rwxr-xr-x. 1 root root 4534 Nov 22 2013 sshd -rwxr-xr-x. 1 root root 1345 Jun 11 10:20 supervisor -rwxr-xr-x. 1 root root 2294 Nov 22 2013 udev-post -rwxr-xr-x. 1 root root 15634 Mar 7 2014 vboxadd -rwxr-xr-x. 1 root root 5378 Mar 7 2014 vboxadd-service -rwxr-xr-x. 
1 root root 20887 Mar 7 2014 vboxadd-x11 And it contains some script, as expected. What am I doing wrong?
It's not complaining about the /etc/init.d/supervisor file itself, but about the interpreter named on the script's shebang (#!) line. If that interpreter path doesn't exist, or the line ends in a DOS carriage return so the kernel looks for something like /bin/sh\r, the exec fails with "No such file or directory" even though the script is plainly there. It's a somewhat misleading error that I've seen many times before.
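To see exactly what the kernel is being asked to execute, dump the first line of the script; od -c makes a stray carriage return or a wrong interpreter path visible immediately. A sketch on a constructed file:

```shell
# Simulate a script whose shebang line has a DOS line ending
printf '#!/bin/sh\r\necho hello\r\n' > demo-init.sh

# The \r right after the interpreter path is the culprit
head -n 1 demo-init.sh | od -c
```

Run the same head -n 1 ... | od -c on /etc/init.d/supervisor: a healthy first line ends in plain \n; if it ends in \r \n, or names an interpreter that doesn't exist, that is the source of the message.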
Supervisor: no such file or directory even though it is there [duplicate]
1,358,240,936,000
I downloaded a Java tarball, extracted the archive and copied over scp to a remote machine. Using the ls command, the java executable exists: ubuntu@Ubuntu:~$ ls -la /home/ubuntu/jre1.7.0_55/bin/ total 420 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 18 03:54 . drwxr-xr-x 6 ubuntu ubuntu 4096 Apr 30 12:55 .. lrwxrwxrwx 1 ubuntu ubuntu 8 Apr 30 10:58 ControlPanel -> jcontrol -rwxr-xr-x 1 ubuntu ubuntu 5714 Mar 18 03:53 java -rwxr-xr-x 1 ubuntu ubuntu 16246 Mar 18 03:54 java_vm -rwxr-xr-x 1 ubuntu ubuntu 104497 Mar 18 03:54 javaws -rwxr-xr-x 1 ubuntu ubuntu 6391 Mar 18 03:54 jcontrol -rwxr-xr-x 1 ubuntu ubuntu 5873 Mar 18 03:53 keytool -rwxr-xr-x 1 ubuntu ubuntu 6013 Mar 18 03:53 orbd -rwxr-xr-x 1 ubuntu ubuntu 5893 Mar 18 03:53 pack200 -rwxr-xr-x 1 ubuntu ubuntu 5981 Mar 18 03:53 policytool -rwxr-xr-x 1 ubuntu ubuntu 5865 Mar 18 03:53 rmid -rwxr-xr-x 1 ubuntu ubuntu 5877 Mar 18 03:53 rmiregistry -rwxr-xr-x 1 ubuntu ubuntu 5893 Mar 18 03:53 servertool -rwxr-xr-x 1 ubuntu ubuntu 6045 Mar 18 03:53 tnameserv -rwxr-xr-x 1 ubuntu ubuntu 215380 Mar 18 03:53 unpack200 However: ubuntu@Ubuntu:~$ /home/ubuntu/jre1.7.0_55/bin/java -bash: /home/ubuntu/jre1.7.0_55/bin/java: No such file or directory What's the reason?
Possibly the Java binary you are executing is a 64-bit binary and you are on a 32-bit OS. In that case the "No such file or directory" message refers not to the binary itself but to its ELF interpreter, the 64-bit dynamic loader /lib64/ld-linux-x86-64.so.2, which doesn't exist on a 32-bit system. If file on the binary and uname -m on the machine disagree, download the 32-bit (i586) JRE tarball instead.
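The mismatch can be confirmed with file and uname; a sketch, with the path taken from the question:

```shell
# Substitute the binary you are trying to run
binary=/home/ubuntu/jre1.7.0_55/bin/java

file "$binary" || true   # e.g. "ELF 64-bit LSB executable, x86-64, ..."
uname -m                 # i686 or i386 means a 32-bit kernel
```

If file says "64-bit" while uname -m reports a 32-bit architecture, the kernel cannot load the binary's 64-bit dynamic loader, and the shell's misleading "No such file or directory" is the result.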
A file exists but executing it doesn't work
1,358,240,936,000
I'm reading a book, Learning Unix for OS X by Dave Taylor. It says: To quickly see all of the binary executables—Unix programs—on your system, Open the Terminal, hold down the Shift key, and press Esc-?, or press Control-X followed by Shift-1 (using Shift-1 to get an exclamation mark). Before the commands are displayed in the Terminal, however, you’ll first be prompted (asked) to make a choice: $ Display all 1453 possibilities? (y or n) If you press the n key on your keyboard, you’ll be taken back to a command prompt and nothing else will happen. However, if you press the y key, you’ll see a multi-column list of Unix commands stream past in the Terminal window. However, the problem is, when I hold down Shift key and press Esc-? nothing happens. Same for Pressing Control-X followed by Shift-1. What am I doing wrong? Is there any setting that I need to enable before using this feature? I'm using iTerm2 on Mac El Capitan. It doesn't work on the stock terminal either. Any help would be much appreciated. Thank you.
The instructions in the book are for bash. Zsh is a different program with different key bindings. In zsh, you can see a list of all commands (external, builtin, function, alias even keywords...) with: type -m '*' For just their names: whence -wm '*' | sed 's/:[^:]*$//' Or for the names of external commands only: print -rlo -- $commands:t | less $commands is an array that contains all external commands. The history modifier :t truncates the directory part of the command paths (keeps only the tail). print -rlo to print them raw in alphabetical order, one per line. Longer, but less cryptic: for p in "$path[@]"; do (cd ${p:-.} && ls); done | sort -u | less This can be adjusted to work in any sh-style shell: (IFS=:; for p in $PATH; do (cd ${p:-.} && ls); done) | sort -u | less (All the commands I list here assume that there are no “unusual” characters in command paths.)
How to display all the unix commands available on the system?
1,358,240,936,000
I have an executable program (no source code, just the compiled executable) that was made in windows (.exe extension). It doesn't use any graphics... it simply reads and writes files. I want to be able to run it in a linux shell script so that I don't have to switch operating systems to get my output. Is there a way to use or convert the executable for linux operating systems?
Wine works even for Windows CLI apps: install it from your distribution's package manager and invoke the program from your script as wine program.exe [arguments]. A program that only reads and writes files, with no graphics, is exactly the easy case for Wine.
How do I run a windows executable in linux shell script?
1,358,240,936,000
Why do Unix-like systems execute a new process when calling a function rather than a dynamic library? Creating a new process is costly in terms of performance when compared to calling a dynamic library.
Unix-like systems don't "call functions by executing new processes". They (now) have shared libraries, like pretty much all other relatively modern operating systems. Shells, on the other hand, do execute other processes to do certain tasks. But not all: they have built-in functions, implemented directly in the shell (or via shared libraries), for the most common and simple tasks (echo, for instance, is implemented as a built-in by a lot of shells). (The Windows cmd shell is no different from Unix shells in this respect, BTW.)

Creating a process in modern Unix-like systems is certainly more expensive than doing an in-process function call, but not by such a huge margin. Kernels are optimized for fast forking, using techniques like copy-on-write for address space management to speed up "cloning" of processes, and sharing the text (code) pages of dynamic libraries. If every executable on your machine that could be called from a shell script were implemented as a shared library, either:

- starting your shell would take a lot of time (and memory) just to load all that stuff up front (even with caching, the dynamic linker has non-trivial work to do, and libraries have data sections, not only text sections; we're talking hundreds if not thousands of libraries here), or
- you would have to load each necessary library on demand, which is possibly a bit faster than starting a process, but the advantage here is really thin. And the data part of your shared libraries becomes really hard to manage: the global state of your shell now depends on the state of a lot of unrelated code and data loaded in its address space.

So you probably would not gain much for typical usage, and stability/complexity becomes more of an issue.

Another thing is that the separate-process model isolates each task very effectively (assuming virtual memory management and protection). In the "everything is a library" model, a bug in any utility library could pollute (i.e. corrupt) the entire shell. A bug in some random utility could kill your shell process completely. This is not the case for the multi-process model: the shell is shielded from that type of bug in the programs it runs.

Something else: lower coupling. When I look at what's in my /usr/bin directory right now, I have ELF 64-bit executables, ELF 32-bit executables, Perl scripts, shell scripts (some of those run Java programs), Ruby scripts and Python scripts... and I probably don't have the most fancy system out there. You simply can't mix the first two types in the same process, and having an interpreter in-process for all the other ones simply isn't practical.

Even if you look only at your "native binary" file format, having the interface between the "utilities" be simple streams and exit codes makes things simpler. The only requirement on the utilities is to implement the operating system's ABI and system calls. You get (nearly) no dependency between the different utilities. That's either extremely hard, or plain impossible, for an in-process interface, unless you impose things like "everything must be compiled with version X of compiler Y, with such and such flags/settings".

There are things for which in-process calls do make a lot of sense performance-wise, and those are already, very often, done as built-ins by the shells. For the rest, the separate-process model works very effectively, and its flexibility is a great advantage.
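The performance point is easy to sanity-check. A rough comparison, assuming bash is available (these are illustrative loops, not a benchmark, and timings vary by machine): the first loop runs the ':' built-in in-process 200 times, the second performs 200 fork+exec round trips.

```shell
# In-process built-in: no process creation at all.
bash -c 'time for i in {1..200}; do :; done'
# Separate process each iteration: fork() + execve() of /bin/true.
bash -c 'time for i in {1..200}; do /bin/true; done'
```

The second loop is measurably slower, but typically still completes in well under a second, which is the "not by such a huge margin" point above.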
Why do Unix-like systems execute a new process when calling a new function?
1,358,240,936,000
The executable file should be in machine code, so it will make system calls without needing C libraries. But I cannot figure out what this image means. Is it just an abstraction method?
You have a program that calls a library function. In this case, it's the system standard library, also called "the C library" (but there are many other libraries that can be called from C code, this is just a name). "Library function" means that the code of the function is distributed as part of a library. There are two ways the program can invoke the library function when it runs.

If the library is linked statically into the program, that means that when the program is built, the result is an executable file that includes both the result of compiling the program's source code (the main function and any other function in the program), and the functions from the library such as printf (which the linker finds in a file called /lib/libc.a or some similar location¹). This means that the "Linker" step is fully performed when the program is built. All the "(Lib ref)" bits are replaced by code from the library. When the program runs, it doesn't need any library file. The code of printf is in the program executable. Since write is a system call and not a library function², its code is inside the kernel.

If the library is linked dynamically, then the linker step in the picture doesn't include the library code in the executable. All it does is fill in some instructions to load certain functions from the library when the program starts: the executable still contains "(Lib ref)" bits. When the executable file is executed, one of the first things it does is to load the shared library file (/lib/libc.so or some such¹) and match the function names required by the program with the function names offered by the library.

The term "abstraction method" is rather vague. Don't fixate on it. You could say that dynamic linking abstracts the library, since the same executable could be run with different implementations of the library.

The diagram seems to be explaining static linking. In real life, dynamic linking is most common on multiprogramming systems.

Static linking has two major downsides: you can't upgrade the library (e.g. to fix a bug) without upgrading all the programs that use it, and if many programs use the same library then you have to store as many copies of the code. Static linking is fine for a low-end embedded system that only runs a single program and can only be upgraded by replacing the whole code image, but dynamic linking is the norm for systems that run many different programs.

¹ The file names are probably more complex than that on your system, but this is not relevant for this answer.
² Actually, there's a library function called write, but all it does is to make the system call. In my answer I'm referring to the system call by that name.
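You can see the dynamic case on any Linux box: ldd lists the shared library files the loader will map in when the executable starts (the library names and paths in the comments are typical examples, not guaranteed to match your system):

```shell
# A dynamically linked executable carries references, not the library code;
# ldd resolves those references the same way the loader would at startup.
ldd /bin/sh
# Typical output (exact names/paths vary per distribution):
#   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
#   /lib64/ld-linux-x86-64.so.2 (0x00007f...)
```

For a statically linked binary, ldd instead reports "not a dynamic executable", since all the library code was copied into the file at build time.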
Do we need c libraries when running a program?
1,358,240,936,000
Normally my Linux OS allows me to create runnable executables (like a.out), but when I attempt to download an .exe from the Internet, it basically is permission restricted (neither user has execution (-x) rights). The problem is, when I change the file permissions with either chmod u+x or chmod 777, and I try to run the program, I always get this error message: run detectors: unable to find an interpreter for ./[file_name].exe where the [file_name] stands for the name of the file. Since my Linux experience and knowledge are very weak, and I did some research but haven't found anyone with this exact problem, any help would be highly appreciated! PS. My OS is Ubuntu 16.04.3 LTS 32-bit
This is totally normal. .exe files are Windows executables, and are not meant to be executed natively by any Linux system. However, there's a program called Wine which allows you to run .exe files by translating Windows API calls to calls your Linux kernel can understand. To run a .exe program you first need to install Wine. To do so you can follow the Official Wine installation tutorial for Ubuntu, or this AskUbuntu post. Then you need to open a terminal, go to the directory where you stored your .exe file and run wine your_file.exe. Some programs don't work properly, others don't work at all. To check whether a program will run properly under Wine or if it requires some tweaks, take a look at your program entry in the AppDB.
Cannot run .exe files
1,358,240,936,000
I have a program to list database files. It is called directly from the shell like db filename to list the whole file, or like db 'filename :: conditions' to list only selected elements ... Another way is to call it with a file, which contains all parameters. db < parameterfile The content is like (quite the same as the content in '' above): filename :: conditions Now I would like to make such a file executable, so that I can call just ./parameterfile. To use a shebang #!/usr/bin/env db failed, because # is not a comment sign, I think. I got the error message db - Line 1 near ""#.//r" - " - syntax error shell returned 26 Is there a one-liner to do this?
You could write a shell script that removes the shebang line, if any, and passes the result to db. Place this script in a directory of your PATH. Otherwise you might have to specify the full path of the script in the shebang line. Use this script as the interpreter for your parameterfile.

Example script runparam to remove the first line if it is a shebang line:

#!/bin/sh
awk 'FNR>1 || ! /^#!/' "$@" | db

Example parameterfile:

#!/usr/bin/env runparam
filename :: conditions

You can run it as ./parameterfile In this case the script can assume that there will always be a shebang line. You could also call the script directly with a parameterfile in the same way as you would call db, but this has no advantage.

runparam parameterfile
runparam parameterfile1 parameterfile2 [...]
runparam < parameterfile

If you will never call runparam with a file that doesn't contain a shebang line, you can use tail instead of awk to unconditionally remove the first line. This might be faster. (Note the "$@" so that the tail variant also works when the file name is passed as an argument, as happens in shebang usage.)

#!/bin/sh
tail -n+2 "$@" | db
Make STDIN executable with shebang
1,358,240,936,000
In Linux, I had created a userid. After creating this, I encountered a problem that the .EXE files are not opened on a simple click. They seem not to be privileged for my user account. How can I overcome this?
Assuming these .exe files were actually compiled for Linux (and your specific architecture), you need to ensure they have execute permissions:

chmod +x your_file_names_here

To make sure these files are actually meant to run on Linux, check the output of

file one_file_name_here
Privileges on Linux?
1,358,240,936,000
Although one can easily make an executable file named ./42 or even ./カラオケ, I find no common packages that include letter-free commands. On Ubuntu, for example, apt-file find . | grep -i -e '/[^a-z/_\-]+$' reports that every executable file's basename has at least a letter or underscore or dash. The only letter-free files or dirs that I found were non-executable, and usually version numbers: docbook/dtd/xml/4.1.2, /usr/include/c++/4.8.2, etc. Do any published rules or standards constrain how commands may be named? Or is this merely cultural practice?
Counterexample: the [ command, the one you use when you write if [ "$foo" = bar ]. It's the same as test, except that it requires the final ] argument, and is a standard utility. Yes, it's an executable file:

# ls -l "/usr/bin/["
-rwxr-xr-x 1 root root 51920 Mar 2 2017 /usr/bin/[
Why do all standard commands include an ASCII letter?
1,358,240,936,000
I have an executable i build on Ubuntu 16.04. The file size shown on the GUI and through the ls -l command is: -rwxrwxr-x 1 alibivmuser alibivmuser 19108760 dic 20 15:49 NreSpeechApplication And I know this means the actual size of the file is 19 MB. On my file system, the size is somehow similar (using ls -s): 18664 NreSpeechApplication I expected the size output of the size command in Linux would give something similar, but it shows as a total more or less 1.8 MB: text data bss dec hex filename 1806360 2416 4552 1813328 1bab50 NreSpeechApplication So my question is: why the two results are so different? And where do these additional MBs come from to form 18MB from 1.8?
size shows the traditional sections of an executable file. Notice that these traditional sections do not include debugging information, for example. Debugging information takes up a lot of space. For a complete list of sections, run objdump -h NreSpeechApplication The total (dec/hex) is not directly tied to filesize for another reason: it includes bss. This section is initialized to zero when loading the executable, therefore it is not included in the file. It's also possible the file is just abnormal, e.g. has random garbage added on the end :).
Linux `size` command gives different results from `ls`
1,358,240,936,000
I'm trying to run a cross compiler on my 64 bit Ubuntu. and it results in a following error: $ ./arm-none-eabi-gcc bash: ./arm-none-eabi-gcc: No such file or directory The file is here and contains some data: $ ls -la arm-none-eabi-gcc -rwxr-xr-x 2 alan alan 776368 Sep 26 19:36 arm-none-eabi-gcc $ head -n 1 arm-none-eabi-gcc ELFا4� 4 (44�4� TT�T���|� ldd shows there are no dependencies required: $ ldd arm-none-eabi-gcc not a dynamic executable strace also provides no additional info: $ strace ./arm-none-eabi-gcc execve("./arm-none-eabi-gcc", ["./arm-none-eabi-gcc"], [/* 80 vars */]) = -1 ENOENT (No such file or directory) write(2, "strace: exec: No such file or di"..., 40strace: exec: No such file or directory ) = 40 exit_group(1) = ? +++ exited with 1 +++ Finally I figure out that it's for a 32 bit system: $ file arm-none-eabi-gcc arm-none-eabi-gcc: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.8, stripped Question If the architecture of the binary is wrong why is the error so ambiguous? I would expect analogous to the below situation where I'm trying to execute a .JPG, where the binary execution makes no sense: $ ./DSC_0140.JPG bash: ./DSC_0140.JPG: cannot execute binary file: Exec format error
The error comes from the fact that you're missing the loader for the binary, /lib/ld-linux.so.2 (as indicated by file). Once you have that installed you'll be able to run ldd arm-none-eabi-gcc to see what's required in addition. The executable is in a valid format, which the kernel understands, so you don't get an "Exec format error", but when the kernel tries to run it, it can't find a required file — the loader —, hence "No such file or directory". As you figured out, a quick solution to get it running on the 64 bit machine is to run: sudo apt-get install lib32z1 lib32ncurses5 although a better solution in the long run is to use appropriate :i386 multiarch packages (which should be what gets pulled in by lib32 packages).
"No such file or directory" as error for wrong architecture [duplicate]
1,358,240,936,000
Is it possible to create what the Mac Finder specifies as a "Unix Executable File" and run it in Terminal like any other command instead of creating a .sh or .command file? If so, how and where can I learn how to make them? If not, how can I create a .sh file that can take options? For example, with the ls command you can type ls -a to list invisible files as well as visible ones. How can I make an executable that will only execute certain code with a -a option?
Well...all you have to do is basically to chmod your file to have executable permissions. For example if you want to create a shell script, you don't need to have .sh or whatever, you just have to write a text file and save it with any name you want, it doesn't even need to have an extension or you can make your own if you want like .bleh just for the fun of it. For a shell script you just have to write the header (#!/bin/sh) at the beginning of the file, write your code, save it, change permissions to executable and run it with ./yourFileName.bleh.
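The options part of the question isn't addressed above. Here is a minimal sketch using the shell's standard getopts built-in; the -a flag and the messages are invented for illustration:

```shell
#!/bin/sh
# Hypothetical script: "-a" switches on extra behaviour (like ls -a);
# any other option is rejected with a usage message, as is conventional.
all=false
while getopts a opt; do
    case $opt in
        a) all=true ;;
        *) echo "usage: $0 [-a]" >&2; exit 2 ;;
    esac
done
shift $((OPTIND - 1))          # drop the parsed options, keep remaining args
if [ "$all" = true ]; then
    echo "option -a given: including hidden entries"
else
    echo "no -a: visible entries only"
fi
```

Save it under any name, chmod +x it as described above, and run it as ./yourscript or ./yourscript -a.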
How to Create a Unix Executable File [closed]
1,358,240,936,000
I have read that if folder has only x set permission to execute it actually means that you are permitted to search this directory. So how to search it?
You misunderstand. “Search” permission is a bit of a misnomer; if you have execution permission but not read permission on a directory, you can access a file in this directory only if you know its name. That is, given a name, you can search the file with this name (and, more importantly, you can access the file that you find). You do that in the usual way, by accessing the file directoryname/filename. You can't browse the list of entries in the directory, so you can't make more advanced searches such as pattern matching. That would require the read permission; the read permission is precisely what lets you browse the list of entries in the directory. See also Execute vs Read bit. How do directory permissions in Linux work?
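A quick way to see search-only permission in action (run as a regular user in a scratch directory, since root bypasses permission checks; the directory and file names here are made up):

```shell
mkdir -p demo && echo secret > demo/known.txt
chmod 111 demo                       # --x--x--x : search only, no read
ls demo || echo "listing denied"     # fails without the read bit (unless root)
content=$(cat demo/known.txt)        # allowed: the exact name is known
echo "$content"
chmod 755 demo && rm -r demo         # clean up
```

The ls fails because enumerating entries needs read permission, while cat succeeds because traversing to a known name only needs the execute (search) bit.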
How to search folder with only x set (execution) permission
1,358,240,936,000
Is there a philosophy behind running a folder as an executable in linux? user@node main % ls -lash ./bin total 0 0 drwxrwxrwx 2 user staff 64B May 23 21:04 . 0 drwxr-xr-x 6 user staff 192B May 23 21:04 .. user@node main % ./bin zsh: permission denied: ./bin Permission denied implies that it may be allowed. If it's not, then why is it permission denied rather than something like can't run a directory? Or is it just a weird artifact of the API when directories are involved in this way? P.S. I am aware that x flag is adopted in the directory context to allow/deny cd-ing into them and long-listing (ls -l) them, this is not what this question is about. P.S.S. In Python, a directory can be treated as a python "executable" if it has a certain file structure inside. (I.e. It's possible to pass a directory instead of a python file to be run by the python interpreter).
Running a folder isn’t possible using Linux APIs. In particular, execve returns EACCES when an attempt is made to do so — this is what Zsh represents as “permission denied”, probably because that error can also be returned if execute permission is denied. The canonical error message for EACCES is “Permission denied”; execve uses it to cover a variety of errors, including any attempt to run a file which is not a regular file, which is what is happening here. Most shells behave like Zsh, but a couple handle this differently; for example, Bash outputs bash: ./bin: Is a directory Zsh can also be instructed to “run” a folder by changing to it, with the autocd option (setopt autocd). fish always changes to a folder if you try and run it.
Is "running a folder" possible in Linux?
1,358,240,936,000
Server is running FreeBSD 9.2. Using vim, I wrote the following script called hello:

#!/bin/sh
echo "hello world"

Then I set it as executable:

>chmod 755 hello

Then I tried to run it from the command line (while in the same folder where the script was saved):

>hello

I got this error message:

hello: Command not found.

Is there something different I have to do to make an executable script in BSD?
You must type: ./hello If you type hello, the shell will try to find in $PATH any executable program named hello. In your case, you have not added your current folder to $PATH, so the shell cannot find your program. The dot . in ./hello represents your current working directory, so the shell can expand it to /full/path/to/hello.
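You can reproduce the whole situation in a scratch directory to see why the ./ matters (the file name hello follows the question; mktemp just provides a throwaway directory):

```shell
dir=$(mktemp -d) && cd "$dir"
printf '#!/bin/sh\necho hello world\n' > hello
chmod 755 hello
./hello                              # runs: the path says exactly where the file is
hello 2>/dev/null || echo "hello: not found via PATH"
PATH="$PATH:$dir" hello              # runs: the directory is now searched
```

Adding the current directory to PATH permanently is generally discouraged for security reasons; typing ./hello is the usual practice.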
can't get hello world shell script to run in FreeBSD
1,358,240,936,000
I have two shell script files with exactly the same permissions etc. I can run one of them by just giving its name in the command line but for the other one I should use sh or ./ to run it. What is the reason?
Odds are that the 2 scripts are in different directories. One of the directories is on the PATH while the other is not. You can use the type command to test if a file is present on your current shell's $PATH.

$ type start_dropbox.bash
start_dropbox.bash is /home/saml/bin/start_dropbox.bash

See this U&L Q&A "How do I test to see if an application exists in $PATH?" for a more thorough coverage.
Shell script execution
1,358,240,936,000
ELF 'Executable and Linkable Format' So if I generate Shared Object files .so are those considered ELF files?
Yes, if you generate them on Linux for native use. You can see this via file:

> file mylib.so
mylib.so: ELF 64-bit LSB shared object [...]
Are .so files in Fedora considered ELF files?
1,358,240,936,000
Windows EXE files which are in the PE format have a header and it contains a checksum. Is it possible to verify it under Linux? Because I am looking for a Linux command I hope you understand that this is a Linux and not a Windows question (please don't close it).
There are a number of tools to do this; one such is pefile, a Python library with a built-in PE checksum verification function:

#!/usr/bin/python3
import pefile
import sys

pe = pefile.PE(sys.argv[1])
if pe.verify_checksum():
    print("PE checksum verified")
else:
    print("PE checksum invalid")

(error handling left as an exercise for the reader). Save this as verifype, run chmod 755 verifype, then run it as ./verifype /path/to/pe.exe to check pe.exe's checksum.
Is it possible to verify a Windows EXE (PE file format) checksum under Linux?
1,358,240,936,000
Very similar to Run a command in an interactive shell with ssh after sourcing .bashrc, yet the answer there doesn't work for me. I want to execute a remote command via ssh under the full interactive shell. I.e., run a remote command under the login shell, with some parameters. Basically, I need the following two cases to work:

ssh user@remote_computer -t bash -l -c '/bin/echo PATH is $PATH'
ssh user@remote_computer -t 'bash -l -c "java -version"'

But currently I get a blank line from case 1, and java: command not found from case 2:

$ ssh user@remote_computer -t bash -l -c '/bin/echo PATH is $PATH'
Connection to remote_computer closed.
$ ssh user@remote_computer -t bash -l -c 'true; /bin/echo PATH is $PATH'
PATH is /usr/local/bin:/usr/bin:/bin:/usr/games
Connection to remote_computer closed.
ssh user@remote_computer -t bash -l -c 'true; java -version'
bash: line 1: java: command not found
Connection to remote_computer closed.

UPDATE: At least two people think it is a quoting problem, but can anyone please explain why it is a quoting problem, and how I can get what I wanted above? E.g., this is also what I had tried:

ssh user@remote_computer -t 'bash -l -c "java -version"'
bash: line 1: java: command not found
Connection to remote_computer closed.

And I've run out of ideas how to quote it in a different way. Please help! If I run the same command after ssh user@remote_computer, I'll get:

$ bash -l -c "java -version"
openjdk version "11.0.15" 2022-04-19 LTS
OpenJDK Runtime Environment Zulu11.56+19-CA (build 11.0.15+10-LTS)
OpenJDK 64-Bit Server VM Zulu11.56+19-CA (build 11.0.15+10-LTS, mixed mode)

So this is clearly not a quoting problem to me.
The shell -c command over SSH is problematic here; there are tricky quoting issues and it is not clear to me who is running what when. Here customjava is only known to something that reads ~/.bashrc for testing purposes. ZSH is my default shell. $ ssh -t localhost bash -c 'source ~/.bashrc;customjava -version' /home/jhqdoe/.bashrc: line 0: source: filename argument required source: usage: source filename [arguments] zsh:1: command not found: customjava Connection to 127.0.0.1 closed. $ ssh -t localhost bash -c ':;source ~/.bashrc;customjava -version' java version 99999 Connection to 127.0.0.1 closed. I guess you could strace things and ssh -v -v -v and probably source code dive to figure out exactly what is different between the above two commands (: is the null command and is less typing than true) but if things are already this fragile and hard to debug I would look for some other solution. My preference is usually to quote the whole command: $ ssh localhost 'bash -ic "customjava -version"' java version 99999 However this will probably become too complicated if there are more elaborate quoting needs and variable substitutions involved in the commands. (Complicated shell quoting is not the sort of thing I want to debug at 2AM in the morning, so I tend to avoid it by default.) Instead Pipe Another method is to pipe the commands to the required shell; this minimizes the complexity on the command to run to just a shell invocation that is probably optional: $ printf 'customjava -version'"\n" | ssh localhost 'bash -i' bash$ customjava -version java version 99999 bash$ exit $ printf 'customjava -version'"\n" | ssh localhost 'bash -l' java version 99999 The downside here is that standard input is not a terminal, so if a command on the other end really needs a terminal, this will not work. There may be warnings about this. $ printf 'customjava -version'"\n" | ssh localhost Pseudo-terminal will not be allocated because stdin is not a terminal. 
zsh: command not found: customjava $ printf 'customjava -version'"\n" | ssh -t localhost Pseudo-terminal will not be allocated because stdin is not a terminal. zsh: command not found: customjava Fake A Terminal If a terminal is required I might switch to expect; this creates a fake terminal that the commands will be run in: #!/usr/bin/env expect spawn -noecho ssh localhost # assume a prompt containing at least "% " (ZSH) expect "% " # replace ZSH with another shell send -- "exec bash\n" # assume a prompt of at least "$ " expect "$ " send -- "customjava -version\n" expect "$ " set version_info $expect_out(buffer) send -- "exit\n" puts "got >>>$version_info<<<" but this has other problems, notably error checking and issues detecting the shell prompt (which someone might fiddle with, for example, so the first thing to do might be to set the prompt to some known value for expect to match on). It may also fall apart if someone breaks the interactive shell configuration in any of innumerable ways. Maybe run /bin/sh and hope that is not bash? But then you may need to configure sh for the custom java. This leads to... Remove the Configuration from the Shell Yet another way to solve this would be to write a special command on the SSH server, probably an exec wrapper, that correctly configures the environment and then runs the java or whatever command. Then you could run setup-our-env bash (an interactive shell for the humans) or setup-our-env java -version (an easy command to run over SSH without the complication of an interactive shell). An exec wrapper could be as simple as: #!/bin/sh PATH=/custom/java/bin:$PATH exec "$@" In other words, the environment settings for the custom java version would not be all mixed up with the interactive shell configuration and thus could be applied to any required command.
ssh to execute remote command under interactive shell
1,358,240,936,000
I'm not talking about a Windows EXE. I mean an actual Linux executable file. At first, I've been wondering whether Linux executables even have embedded icons, and it seems as though they do, because there are programs for which I've never found the icons on my system (I might just be blind, and they were actually hidden somewhere I've never fell upon.) If this is possible, it would also be great if there were a way of specifying that I want the embedded icon of this or that executable in the "Icon=" statement in my desktop launchers (.desktop) without actually extracting it to a separate file. I'm running Debian GNU/Linux Bullseye Stable.
A program's icons can be found inside their package source files (not in the executable binary). Icons for programs that have been installed should be in /usr/share/icons.
How to Extract Icons from Executable File in Linux
1,544,728,212,000
I have a library - users are to create executable files, potentially with a hashbang to tell exec which executable to use. If they omit the hashbang, then I think most systems default to /bin/sh, I want to change that default. So some files might start with: #!/usr/bin/env foobar other files might start with: #!/usr/bin/env perl or #!/usr/bin/env ruby And in some cases, the user will omit the hashbang altogether. In that case, I will still execute the file directly, and I want to default to using the foobar executable. In other words, I don't know what the hashbang will be in advance, and I want the default to be foobar instead of the default being /bin/sh, if there is no hashbang present. I think the way to do what I am doing is to create an executable that can run exec first, and if that fails with a particular error message, then run the script indirectly with the foobar executable? Something like this: function run { stdio=$("$1" 2>&1) if [[ stdout/stderr matches a certain error message ]]; then foobar $1 fi } My question is - does anyone know what I am talking about - and does anyone know how I can default to a particular executable if no hashbang is present? Not that it's in theory less convenient to check if there is a hashbang, and more in theory more convenient just to run it?
You can implement this in two steps using execve: ask the C library to ask the kernel to execute the target, and if that fails, try again with foobar. This is actually how shells commonly implement execution. There are other exec family functions which will run shebang-less scripts with /bin/sh, but execve won’t. What exactly happens when I execute a file in my shell? has a lot more detail on this topic.
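A rough user-space sketch of that fallback, written as a shell function: foobar is the placeholder interpreter name from the question, and sniffing the first two bytes for "#!" stands in for the kernel rejecting a shebang-less file:

```shell
#!/bin/sh
# run FILE: execute FILE directly if it starts with "#!" (the kernel will
# dispatch to the named interpreter); otherwise hand it to the default
# interpreter "foobar" (a placeholder name, not a real tool).
run() {
    if [ "$(head -c 2 "$1")" = '#!' ]; then
        "$1"
    else
        foobar "$1"
    fi
}
```

The execve-based approach in the answer is more robust (it also handles native binaries, which this byte check does not), but the shell version conveys the shape of the fallback.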
Change the default executable for file with potentially missing shebang
1,544,728,212,000
I had a C program and made it executable on my 32 linux mint. For assignment purposes I had to test if it was working on university pool computers. I honestly don't know which linux distributions are installed there, just had two minutes didn't really take a look but I know that it's also 32 bit system. So when I tried to run it in terminal (./program), I got bash permission denied error, which I know means that the file is not executable So I ran chmod u+x program command again to make it executable and then it worked, my program was working just fine as on my laptop. Does anybody know what can be the reason for that? I mean, obviously, my file is executable, at least on my linux mint, what can be the reason that it is not on some other linux distribution? Maybe I have to make it executable in another way? I only know the one mentioned earlier chmod u+x program. UPDATE: as mentioned in the comments the way I transfered my file to university computer was: download it from google drive. Now I tested on my laptop but to another system (UBUNTU), I tried again downloading from google drive the single file and the problem was same: not executable. Then I tar-ed the file (as Richard suggested) and after extracting it file was executable right away, so this leads me to conclusion that if I tar it, it should also be executable to any other system , in this case my university computer.
Non-Unix file systems will not store this data; the execute bit lives outside the file's contents, so it was not copied to Google Drive. Therefore you had to run chmod again. On the machine where you compiled it you did not have to run chmod, as the compiler does this for you. As long as you keep it within the Unix ecosystem, the x bit will remain. However, Google Drive is not Unix (though it runs on Unix). tar is a program that can wrap up a load of files/directories into a single file, along with all of their metadata.
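You can verify the tar round trip yourself; this throwaway check (file names invented) shows the execute bit surviving archiving and extraction:

```shell
tmp=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$tmp/prog"
chmod +x "$tmp/prog"
tar -cf "$tmp/a.tar" -C "$tmp" prog    # the mode bits travel inside the archive
mkdir "$tmp/out"
tar -xf "$tmp/a.tar" -C "$tmp/out"
ls -l "$tmp/out/prog"                  # still has the x bits
"$tmp/out/prog"                        # runs without another chmod
```

Uploading the raw file through a service like Google Drive has no such container for the mode bits, which is why the extracted-from-tar copy works immediately while the directly downloaded copy does not.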
executable file problem after transfer to other system
1,544,728,212,000
I downloaded several pre-built binaries of the same program (nodejs-linux, -x86, -x86_64). In different shells I get a similar error that no such file or directory: node. The $PATH is correct and the binaries exist and are executable. Is this because I'm on a musl-based linux distribution and the binaries use glibc? I thought the programs would crash or exit non-zero in such a case. Note: Both @DepressionDaniel and @xhienne gave correct answers below.
If your libraries don't match the dynamic libraries required by the executable, it won't even start. To check the dynamic libraries this executable is linked to, do:

ldd /path/to/executable

If you see => not found, you know what is missing.
Various shells won't run a binary that exists
1,544,728,212,000
I'm in dual boot with Windows, and i've created a shared ntfs partition. I' ve cloned a project from github, use make to compile it but seems it isn't recognized as runnable. I've added the right permission and tried to change the owner of the directory. This is the output of ls -l: total 298 -rwxrw-rw- 1 federicop federicop 375 ago 13 00:37 CLOSE.c -rwxrw-rw- 1 federicop federicop 1015 ago 13 00:37 CommandsHandler.c -rwxrw-rw- 1 federicop federicop 296 ago 13 00:37 CONFIG -rwxrw-rw- 1 federicop federicop 5483 ago 13 00:37 Config.c -rwxrw-rw- 1 federicop federicop 430080 ago 13 00:37 core -rwxrw-rw- 1 federicop federicop 886 ago 13 00:37 Error.c -rwxrw-rw- 1 federicop federicop 1774 ago 13 00:37 Heartbeating.c drwxrw-rw- 1 federicop federicop 4096 ago 13 00:37 inc -rwxrw-rw- 1 federicop federicop 346 ago 13 00:37 makefile -rwxrw-rw- 1 federicop federicop 5530 ago 13 00:37 OPE.c -rwxrw-rw- 1 federicop federicop 0 ago 13 00:37 output.txt -rwxrw-rw- 1 federicop federicop 3157 ago 13 00:37 READ.c -rwxrw-rw- 1 federicop federicop 37 ago 13 00:37 Run.sh -rwxr-xr-x 1 federicop federicop 47486 ago 13 08:21 Server -rwxrw-rw- 1 federicop federicop 3323 ago 13 00:37 server.c -rwxrw-rw- 1 federicop federicop 7218 ago 13 00:37 StruttureDati.c drwxrw-rw- 1 federicop federicop 0 ago 13 00:37 TestDIR -rwxrw-rw- 1 federicop federicop 2186 ago 13 00:37 Utils.c I need to run Server, and my user is federicop. This directory is in /media/federicop/Data and i have this line in my fstab: UUID=82440D36440D2F0B /media/federicop/Data ntfs-3g auto,users,permissions 0 0 If i try to run it i get an error: ./Server bash: ./Server: Permission denied The code works in another machine. Also I think is worth mentioning that my files are listed with another color:
Probably your NTFS volume is mounted with option noexec, which is the default enforced by permissions. See man ntfs-3g for details. You could selectively enable the exec option by adding it to fstab:

UUID=82440D36440D2F0B /media/federicop/Data ntfs-3g auto,users,permissions,exec 0 0

Run grep /media/federicop/Data /proc/mounts to see the mount options actually applied.
Can't run c program in other partition
1,544,728,212,000
I want to add a program to my $PATH, but its code is split into various files that it imports at run-time from a lib/ in its root directory. projectRootDirectory ├ programBinary └ lib ├ someLibrary └ someLibrary2 How do I add such a program to my $PATH without it complaining about missing dependencies? I'd normally get the binary into /usr/local/bin by copying cp /path/to/programBinary /usr/local/bin or symlinking cd /usr/local/bin ln -s /path/to/programBinary programBinary but both make it fail to find its dependencies. I can't move the whole directory into /usr/local/bin because some of the required files are executables too, which I don't want cluttering my $PATH. How should I be doing this?
You can of course add projectRootDirectory to your $PATH, but this has at least two drawbacks: It looks like, the way you are describing it, this particular project does not organize its project nicely into bin and lib subdirectories like so: projectRootDirectory ├ bin │ └ programBinary └ lib ├ someLibrary └ someLibrary2 therefore you'd be forced to put projectRootDirectory itself into the $PATH, and since that contains other things besides binaries intended for execution, it's a bit ugly. If you have many similar projects, the contents of your $PATH will proliferate out of control. Instead, the simplest thing you can probably do in this particular case is to place a wrapper executable in /usr/local/bin, which is a very simple shell script that just runs the "real" program from the location where it lives. #!/bin/sh exec projectRootDirectory/programBinary "$@" Since the wrapper script is calling it with its full pathname, it will probably be able to locate its auxiliary files in the manner it normally does.
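To see the wrapper trick end to end, here is a disposable sketch under /tmp; every path and name below is invented for the demonstration, not taken from your project:

```shell
# Stand-in for the project: a binary that needs to be run by its full path
mkdir -p /tmp/demo-proj
cat > /tmp/demo-proj/programBinary <<'EOF'
#!/bin/sh
echo "ran as $0 with args: $*"
EOF
chmod +x /tmp/demo-proj/programBinary

# The wrapper you would install as /usr/local/bin/programBinary
cat > /tmp/demo-wrapper <<'EOF'
#!/bin/sh
exec /tmp/demo-proj/programBinary "$@"
EOF
chmod +x /tmp/demo-wrapper

/tmp/demo-wrapper one two
# prints: ran as /tmp/demo-proj/programBinary with args: one two
```

In real use you would write the wrapper to /usr/local/bin (with sudo) and chmod +x it, exactly as sketched here with temporary paths; because the wrapper calls the program by its absolute path, the program can still locate its lib/ directory the way it normally does.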
How do I add a program with a directory of dependencies to $PATH?
1,544,728,212,000
I have a PHP shell, and I have a function that does just that (but it gives me an error when I call the scandir($path) function, because of restrictions in PHP), so I need to use my bash shell to do the same. It can be one command that lists all 3 things separated by some delimiter, or it can be 3 commands. What is the best way to do that? The only thing I can think of is find executed 3 times with -maxdepth 1 and some options.
With GNU find, you could call: find /some/dir -mindepth 1 -maxdepth 1 -type f \ \( -executable -printf 'X%p\0' -o -printf 'F%p\0' \) -o \ -type d -printf 'D%p\0' The output will be a NUL-delimited (NUL is the only character that may not appear in a file path) list of records, the first letter of which identifies the type (X, F, D for executable regular files, other regular files, directories). For symlinks, if you want to consider the type of the target of the symlink instead, use -xtype instead of -type above. -executable returns files that are executable by the process that runs that find command. Other types of files (fifo, socket, doors, devices...) are ignored. The . and .. directory entries are also ignored.
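As a quick sanity check, here is a self-contained variant on a throwaway directory (names invented). It prints a newline-separated version of the same classification purely for readability; keep the \0 separators from the answer in real scripts so unusual file names cannot break parsing:

```shell
# Demo directory with one of each kind
dir=/tmp/demo-scan
mkdir -p "$dir/subdir"
printf '#!/bin/sh\n' > "$dir/tool" && chmod +x "$dir/tool"
touch "$dir/notes.txt"

# Same classification as above; '%p\n' is used only so the demo is readable
find "$dir" -mindepth 1 -maxdepth 1 -type f \
    \( -executable -printf 'X %p\n' -o -printf 'F %p\n' \) -o \
    -type d -printf 'D %p\n' | sort
# D /tmp/demo-scan/subdir
# F /tmp/demo-scan/notes.txt
# X /tmp/demo-scan/tool
```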
List files, directories and executables in current directory
1,544,728,212,000
So I compiled FFmpeg from this guide as a standard user, and it works fine as the user I compiled it with, but if I do sudo ffmpeg the program can't be found. Is it possible to make it accessible by root, or do I need to rebuild while logged in as root?
The issue here is that ffmpeg has not been placed in a directory that is in root's $PATH. The guide you linked to (in the future please include the steps here so we don't need to go looking for them) tells you to run this command: ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" This will cause ffmpeg's files to be installed into $HOME/ffmpeg_build and the compiled executable into $HOME/bin (in the make install step). If you can run it as your normal user, that means that you have modified your $PATH and have added that directory to it. For root to run it you can either add /home/your_user/bin to root's $PATH or, much better, just call sudo and give the path to the executable: sudo ~/bin/ffmpeg
Compiled, can't access with sudo
1,544,728,212,000
I am looking to download ModelSim for Ubuntu, but the site only gives a .exe file. Can I still install the software? Does ModelSim exist for Ubuntu? If yes, where can I find it? Thank you.
Some more information in the future would be nice. It would, for example, be good to know what exactly you tried to download. If you look at the website again, you will see that ModelSim PE is only available for Windows. For a Linux version you will need to download ModelSim DE. http://model.com/content/modelsim-de-simulation-and-verification?quicktabs_1=1#quicktabs-1
Modelsim for ubuntu
1,544,728,212,000
I wanted to set gsettings as /usr/bin/gsettings so I created an alias. But I am not sure if that works: $ type gsettings gsettings is aliased to `/usr/bin/gsettings' gsettings is /home/linuxbrew/.linuxbrew/bin/gsettings gsettings is /usr/bin/gsettings $ which gsettings /home/linuxbrew/.linuxbrew/bin/gsettings Also another example: $ type pandoc pandoc is aliased to `/usr/bin/pandoc' pandoc is /home/linuxbrew/.linuxbrew/bin/pandoc pandoc is /usr/bin/pandoc pandoc is /home/nikhil/.cabal/bin/pandoc $ which pandoc /home/linuxbrew/.linuxbrew/bin/pandoc Question Can someone please clarify which binary for pandoc and gsettings would get executed when I type pandoc and gsettings in bash? Does the order of the output of the type command have some significance? Note $ type type type is a function type () { builtin type -a "$@" } type is a shell builtin
Yes, the order is important: whichever one is first in the output of type is the one that will be executed. So, in your case, pandoc would run the alias, /usr/bin/pandoc, and gsettings would run /usr/bin/gsettings. I can't actually find where this behavior is documented, where it is stated that the first result of type -a is the one that will be executed, but you can see it in action if you unset and then reset an alias, for example: $ type -a ls ls is aliased to `ls --color=tty' ls is /sbin/ls ls is /usr/bin/ls $ unalias ls $ type -a ls ls is /sbin/ls ls is /usr/bin/ls $ alias ls='ls --color=tty' $ type -a ls ls is aliased to `ls --color=tty' ls is /sbin/ls ls is /usr/bin/ls As you can see, the alias goes back to the beginning when it is re-added. Compare to: $ touch ~/bin/ls; chmod 755 ~/bin/ls $ type -a ls ls is aliased to `ls --color=tty' ls is /sbin/ls ls is /home/terdon/bin/ls ls is /usr/bin/ls The new fake command I added, ~/bin/ls, is shown after the alias (aliases always take precedence), after /sbin/ls and before /usr/bin/ls. This is precisely the order of execution as you can see by checking the order of the directories in my $PATH: $ echo "$PATH" /sbin:/usr/sbin:/home/terdon/bin:/usr/local/bin:/usr/local/sbin:/usr/bin Note how /home/terdon/bin is after /sbin and before /usr/bin, and how this order is reflected in the output of type. Finally, the simplest way to know which one will be executed is to run type without -a: $ type ls ls is aliased to `ls --color=tty' That always returns just one item and that is the one that will be executed when you use that command.
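The precedence is easy to watch with a pair of throwaway commands (directory and command names are invented): whichever directory comes first in $PATH supplies the copy that actually runs, which is exactly the order type -a reports.

```shell
# Two directories each providing a command named "greet"
mkdir -p /tmp/demo-first /tmp/demo-second
printf '#!/bin/sh\necho first\n'  > /tmp/demo-first/greet
printf '#!/bin/sh\necho second\n' > /tmp/demo-second/greet
chmod +x /tmp/demo-first/greet /tmp/demo-second/greet

PATH=/tmp/demo-first:/tmp/demo-second:$PATH
hash -r            # drop any cached command locations
command -v greet   # prints /tmp/demo-first/greet: the first PATH hit wins
greet              # prints: first
```

In bash, type -a greet on this setup would list /tmp/demo-first/greet before /tmp/demo-second/greet, mirroring the execution order shown above.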
Which binary would be run when we have multiple installations?
1,544,728,212,000
I recently began messing with Desktop Entries in order to run some scripts at Gnome startup. I've read through some freedesktop documentation, as well as this post on creating a startup script. I currently have a desktop entry working on startup, but it is not behaving in the way I was made to understand it should. Some system info: this is CentOS7 running on VirtualBox in Windows This is my desktop entry: [Desktop Entry] Name=fixres GenericName=Resolution Fixer Comment=Changes resolution to 1920x1080 Exec=bash /home/detroitwilly/scripts/fixres.sh Terminal=false Type=Application X-GNOME-Autostart-enabled=true The script being executed uses xrandr to add a new resolution mode and apply it to my virtual display. Now, the first line in the script has the shebang #!/bin/bash. My understanding is that if the shebang is on the first line of the script, I shouldn't need to specify bash in the Exec= line of the desktop entry. Note that if I remove bash from the Exec= line, the application will not run. I've also verified that /bin is in my $PATH variable, so I should automatically have access to bash. Any ideas as to why I need to prepend the path to my script with bash? Thanks!
There are two portions required to execute a BASH script directly: the shebang the executable bit The shebang should look as follows. #!/usr/bin/env bash (Using env is a best practice. It can also be written as the full path, e.g. #!/bin/bash.) Then set the executable bit as follows. chmod +x /home/detroitwilly/scripts/fixres.sh
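A minimal end-to-end sketch of those two requirements (the file name is a stand-in, not your actual script):

```shell
# A tiny script with the shebang on line 1
cat > /tmp/demo-fixres.sh <<'EOF'
#!/usr/bin/env bash
echo "resolution fixed"
EOF

chmod +x /tmp/demo-fixres.sh   # set the executable bit
/tmp/demo-fixres.sh            # now runnable directly, no leading "bash"
# prints: resolution fixed
```

With both pieces in place, an Exec= line can name the script alone, because the kernel reads the shebang and starts bash for you.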
Desktop Entry Requires "bash" in Exec Value
1,544,728,212,000
I tried upgrading pip3 using su -c 'pip3 install --upgrade pip' because I got errors and it failed when trying to upgrade it as a normal user. This removed the pre-installed pip from /usr/bin and dumped it in /tmp, replacing it with a system-wide installation of pip which is only accessible by root. I haven't tried to uninstall this new pip because I suspect it would lead to more problems. Since I still have the old pre-installed pip in /tmp, how do I get back the pre-installed pip using this executable that is still in /tmp? Location of pip in /tmp: /tmp/pip-ufkfr3th-uninstall └── usr └── bin └── pip
It's likely that this was the package manager's version of pip, I'd simply re-install using your package manager. Fedora/CentOS $ sudo yum reinstall python-pip Debian/Ubuntu $ sudo apt-get --reinstall install -y python-pip
Is there a way to restore an uninstalled executable from bin?
1,544,728,212,000
Doing this in a bash script: ./Execute_program > MyOutput I get a log file of the output; however, this causes the output not to be displayed on the terminal screen. Is there any way to do the same but have the output displayed on the screen at the same time?
Use the universal pipe fitting, tee. tee reads input, and duplicates the output to both standard output and the specified file: ./Execute_program | tee MyOutput If you want to append to rather than overwrite the specified file, use -a: ./Execute_program | tee -a MyOutput If you want to write to several files, just add them as additional parameters: ./Execute_program | tee MyOutput MyOtherSavedLog
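One caveat worth knowing: tee only sees what arrives on its standard input, so anything the program writes to standard error skips both the pipe and the log file. Merging the streams with 2>&1 before the pipe captures both. A sketch with a stand-in program:

```shell
# Stand-in program that writes to both streams
cat > /tmp/demo-prog <<'EOF'
#!/bin/sh
echo "normal output"
echo "an error message" >&2
EOF
chmod +x /tmp/demo-prog

# 2>&1 merges stderr into stdout before the pipe, so tee logs both lines
/tmp/demo-prog 2>&1 | tee /tmp/MyOutput
grep -c . /tmp/MyOutput   # prints: 2
```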
How to run a program, redirect its output and display the output on the screen? [duplicate]
1,544,728,212,000
I have build tools version 25.0.1 installed on a machine running Enterprise Red Hat Linux, 64 bit. When I try and run the aapt command, from the command line I get the following: -bash: ./aapt: cannot execute binary file From researching it looks like the issue is that the aapt executable was compiled for 32 bit. I have tried many suggestions out there to install via yum libs to allow 32 bit executables to run, but none have let aapt run. Here is the output from file ./aapt ./aapt: Mach-O 64-bit executable Here is the output from the uname command 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux Any help would be appreciated!
./aapt is not an ELF-formatted executable, it's a Mach-O executable. This format is used on macOS, so evidently you have a macOS executable, not a Linux executable. What does sudo rpm -q --file ./aapt output? You might want to see if there is a version of aapt available for your system by executing: sudo yum --enablerepo=* provides '*/aapt'
Android build tool command aapt “cannot execute binary file”
1,544,728,212,000
I am facing a little issue: I have created an FTP server with vsftpd and a web server with Apache 2.2. Now, my goal is to make it so that anyone can log into the machine via FTP and upload files (.html, .php) so that they are executable by Apache. The point is that they aren't. In fact, the files get created with 600 permissions, and with owner "ftpadmin". Apache returns an error. Do you know a quick way to fix this?
The Apache process is started by the user www-data (on Ubuntu; check which user your Debian system uses). Those files were created by the FTP user: they are owned by ftpadmin and have read and write permissions for the owner only (group members and others cannot access them). For currently uploaded files Add read and execute permissions for other users sudo chmod o+rx *.php sudo chmod o+rx *.html (OR) Change the group of the files to www-data and add read and execute permissions for the group sudo chgrp www-data *.php sudo chgrp www-data *.html sudo chmod g+rx *.php sudo chmod g+rx *.html Refer to the link below to set default file permissions for future file uploads. How to set default file permissions for all folders/files in a directory?
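For future uploads, rather than fixing each file after the fact, vsftpd's own upload umask can be relaxed at the source. A sketch of the relevant setting, assuming a stock /etc/vsftpd.conf (the option name is from vsftpd's documentation; verify the config file path on your distribution):

```
# /etc/vsftpd.conf -- affects files uploaded by local users
# umask 022 gives new files mode 644 (rw-r--r--), readable by the Apache user
local_umask=022
```

Restart vsftpd afterwards (e.g. sudo service vsftpd restart) for the change to take effect on new uploads.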
FTP user creates file that are not executable by apache
1,544,728,212,000
I have a binary file that needs to run at startup on all accounts (including unprivileged user accounts), so a command to run it will be put into /etc/rc.local. The program itself will have only execute permissions so that it cannot be read or modified by an unprivileged user. It is located in /usr/bin. However, it needs to access a secret key when it runs (key is in /usr/share). Is it possible to create a file containing the secret key that will not be readable or writable to all common users, but readable by the program? Could the file take advantage of the program being privileged? Perhaps it could have some sort of setup with the file permissions (chmod)? Or is there a way that that it should be encrypted in some way?
If it, as you say, "needs to run at startup on an unprivileged user account", then it will necessarily have access to all files that the unprivileged user account in question has access to. You could create a dedicated unprivileged user account for the purpose of running the script. Set the permissions on the secret key file so that only that dedicated user account can read it. But it sounds like you need to run the program under a specific pre-existing user account so that might not work for you. There are other solutions, such as running it in a chroot that has access to the secret key, but whether or not that's viable depends on what it does and what else, exactly, besides the secret key file, it needs access to. You will not need to use sudo in any case because /etc/rc.local runs as root, so you can su directly to whichever account you ultimately choose to run the program under. EDIT after clarification of question: It needs to execute every time a user of any sort logs in. I see. This is quite different from running it once only at startup using /etc/rc.local as you originally stated! Your best bet in this case will probably be to embed the secret key in the binary instead of accessing it as an external file, and have the binary owned by root and executable but not readable by other users (permissions such as rwx--x--x). The users will not be able to get at the key (unless they compromise root on the system) but they can run the binary. If you cannot embed the secret key in the binary then you can make the binary setuid to some user that can access the secret key... but take all the care that goes with writing setuid binaries.
File that is only readable with root privileges
1,544,728,212,000
This question, "Figuring out what OS is on which partition?", got me wondering whether you could run executables from a mounted distro without actually having booted into said distro. The answer to this question, "How to recover from a chmod -R 000 /bin?", got me thinking that perhaps you could call the loader from the mounted distro's partition. Is this possible? Example When I tried it, it didn't work, but I'm thinking I might be missing some other bits. The command I was trying this with was lsb_release. I'm on a 64-bit Fedora 14 installation at the moment, if that matters. $ sudo /lib/ld-2.13.so /usr/bin/lsb_release /usr/bin/lsb_release: error while loading shared libraries: /usr/bin/lsb_release: invalid ELF header
Yes, provided that the running kernel is capable of running the binaries from the mounted distribution. This requires that the mounted distribution is for the running processor architecture, or a compatible one. You aren't going to be able to run ARM binaries on an x86 processor, for example. Compatibility depends on the CPU; for example, on x86/amd64, 64-bit binaries run only on 64-bit CPUs while 32-bit binaries run on both 32-bit and 64-bit CPUs. Compatibility also depends on the operating system; for example, on x86_64 CPUs, Solaris can run indifferently 64-bit and 32-bit programs on 32-bit and 64-bit kernels; Linux 64-bit kernels can run 32-bit programs but not vice versa; while OpenBSD 64-bit kernels cannot run 32-bit programs. Statically-linked executables will run in place with no effort, provided that they aren't looking for files in fixed locations. Dynamically-linked executables may not work if the mounted distribution has a more recent version of the C library, or is using a different C library (e.g. uClibc vs. Glibc), or is using a different architecture for which the host has no userland support (e.g. i386 vs amd64, armhf vs armel). Sometimes, to make dynamically-linked executables work, you'll need to both call the dynamic linker explicitly, and put the library directories of the mounted system first on the library search path. LD_LIBRARY_PATH=/mnt/lib:/mnt/usr/lib /mnt/lib/ld-linux.so.2 /mnt/bin/foo An easy way to be sure that the program from the mounted system will find everything it needs in the right place (loader, libraries, configuration files, data files, etc.) is to run it in a chroot. A chroot restricts the view of the filesystem to a single directory and its subdirectories. Only root can call the chroot command. chroot /mnt /bin/foo Since the program is running with /mnt as its root directory, it won't see anything outside that hierarchy: no /home (or rather, the one from /mnt), no /proc, only the static default for /dev, etc. 
The special filesystems such as /proc can be mounted in the chroot, from the outside (mount -t proc proc /mnt/proc) or from the inside (mount -t proc proc /proc). Under Linux, directories can be re-mounted in a second location (while remaining wherever they were already) with mount --bind, or with mount --rbind to also replicate filesystems mounted underneath the specified directory. mount --rbind /dev /mnt/dev mount --bind /home /mnt/home mount --bind /proc /mnt/proc mount --bind /sys /mnt/sys chroot /mnt /bin/foo Debian and some other distributions provide a tool called schroot to automate such mounts and perform other niceties. It's overkill for a one-off thing, but convenient if you want to maintain multiple distributions.
Can executables from a mounted distro's disk be run without booting into it?
1,544,728,212,000
How should I locate all 32bit programs on my system ? I'm running a 64bit OS. (There might be some , but I forget)
This is kinda crude, but should do the trick find / -mount -type f -perm /111 -exec sh -c 'objdump -f "$1" 2>/dev/null | grep -q elf32 && echo "$1"' sh {} \; -mount keeps us on the / filesystem -type f restricts it to files only -perm /111 restricts it to files with an executable bit set then we run objdump -f on each file and echo the file name if the output contains elf32 (passing the name to sh as "$1" keeps file names containing spaces or quotes from breaking the command) The first 3 filters are just so we narrow the results a bit and aren't running objdump on every single thing.
Running 64bit OS , Find all 32bit programs on a system
1,544,728,212,000
I am trying to run an executable on Angstrom Linux, but ash tells me -sh: ./myEx: not found I've checked with readelf the program interpreter and it is root@beagleboard:~# readelf -l myEx | grep interpreter [Requesting program interpreter: /lib/ld-uClibc.so.0] This program interpreter is missing. I've tried to symbolic link ld-linux.so.3 to ld-uClibc.so.0 but I think it's not correct and with no good results. I don't know where to install that or if I have to cross compile it from sources.
I figured out which libc my system was using. In my case it was eglibc, which is the default choice when cross compiling with OpenEmbedded for Angstrom 2012.05. Cross compiling for eglibc resolves this issue. I wrote this next part only for reference, because I asked the bitbake mailing list and didn't find anything about this on Google: to cross compile for uClibc, set ANGSTROMLIBC = "uclibc" in a conf file (as stated in this FAQ). uClibc should not be compiled directly; it will be built when you run a bitbake recipe on a source, and packaged under /tmp/deploy/ subdirectories, usually in the same directory as your package.
ld-uClibc.so missing
1,544,728,212,000
I am curious: in the following scenario, what kind of permissions does a shell script or Java program have (owner/group/other)? There is a script called run.sh, and it in turn calls a Java program a.java. The script and the Java program are owned by user A and have -rwxrw-r-- permissions. When they are run by user A, which permission group do they belong to? Do they get the permissions of user A as the owner? And there is another user B, who is in the same group as user A. He executes run.sh, which in turn calls the Java program. Now what permission group do they belong to? Do they get the permissions of user B's group? Maybe the program will try to write to a directory /common/abc which has permissions of drwxrw-r--; if the program has the permissions of "other", it will fail. A point to notice is that they both use the expression sh run.sh to run the script, so they don't need the execute permission. Does it only require the read permission?
All processes start executing as the same user and group(s) as the process that call them, unless they are called through a setuid or setgid executable, i.e. one that has rws or r-s or -ws or --s in its permissions. The permission bits other than the s bit are irrelevant. The owner of the executable only matter if the script is setuid (i.e. has the s bit set in the user column), and the owning group only matters if the script is setgid (i.e. has the s bit set in the group column. Therefore, in the scenario you describe, when user A runs the script, the script and the Java program are both executed as user A, and as whatever group(s) A belongs to. When they are executed by user B, they are executed as user B and as whatever group(s) user B belongs to. Running a native executable requires execution permission and nothing more. Running a script (a program starting with #!) requires both execution permission (to start the execution, before the #! mechanism takes over) and read permission (for the interpreter to read the script). If you invoke the interpreter explicitly, the script doesn't need to be executable: as far as the system is concerned, it's just another data file.
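The first paragraph is easy to verify: a script always runs with the identity of whoever invoked it, and invoking it via sh needs only read permission. A throwaway sketch:

```shell
cat > /tmp/demo-whoami.sh <<'EOF'
#!/bin/sh
echo "script runs as: $(id -un)"
EOF

chmod +x /tmp/demo-whoami.sh
/tmp/demo-whoami.sh        # reports the invoking user, not the file's owner

chmod -x /tmp/demo-whoami.sh
sh /tmp/demo-whoami.sh     # still works: invoking the interpreter needs only read
```

Whoever runs either command sees their own user name printed, which is exactly why user A and user B get different effective permissions from the very same script.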
permission and user/group of a program executed through a script
1,544,728,212,000
I have installed the newest Erlang from source. As the final step I have executed sudo make install Among other things, it placed erl link in /usr/local/bin, but its permissions are insufficient for me to use, other than with sudo lrwxr-x--- 1 root wheel 21B Apr 19 22:26 erl@ /usr/local/bin permissions: drwxr-xr-x 18 root wheel 612B Apr 20 21:45 bin/ sudo gives enough permissions to execute, but not enough to change the permissions. The question is, how do I change the permissions on these symbolic links?
Are you using chmod's -h option (from the man page: "-h If the file is a symbolic link, change the mode of the link itself rather than the file that the link points to")? I tried it, and it seemed to do the job: sudo chmod -h o+rx erl
No exec permissions on programs in /usr/local/bin
1,544,728,212,000
I have a laptop with both a discrete and an onboard graphics card. I want to run a game's executable file using the discrete GPU, but instead it runs on the onboard one. How can I run it with the discrete GPU? The game is not installed; it is a folder from which I run the executable file. OS: Pop!_OS (GNOME) CPU: AMD Ryzen 5 4000 series GPU: NVIDIA GTX 1650 game: Cities: Skylines RAM: 16 GB File explorer - Nautilus I get this option for installed apps but how can I get it for executable files? Tried to make the Desktop application File: location - /usr/share/applications/Cities.desktop - and double clicking it opens up the file explorer [Desktop Entry] Encoding=UTF-8 Version=1.0 Type=Application Terminal=false Exec="/home/{username}/Games/linux games/Cities - Skylines Collection/Cities.x64" Name=Cities:Skyline Icon="/home/{username}/Games/linux games/Cities - Skylines Collection/LauncherAssets/game-logo.png" __GLX_VENDOR_LIBRARY_NAME=nvidia __NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only
For AMD or Intel GPUs, setting the environment variable DRI_PRIME=1 should do the job. For nVidia GPUs, you additionally need __GLX_VENDOR_LIBRARY_NAME=nvidia, __NV_PRIME_RENDER_OFFLOAD=1, and __VK_LAYER_NV_optimus=NVIDIA_only. (The "Launch using Discrete graphics card" menu option internally uses the switcheroo-control service and I got these from its source code; I'm not 100% sure whether all of them are still needed today.) I think there's a prime-run tool for nVidia but I don't actually know if it does anything beyond the above. To include these in your .desktop file, you need something like (note the two sets of quotes, double on the outside and single around the path): [Desktop Entry] Encoding=UTF-8 Version=1.0 Type=Application Terminal=false Exec=sh -c "__GLX_VENDOR_LIBRARY_NAME=nvidia __NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only '/home/{username}/Games/linux games/Cities - Skylines Collection/Cities.x64'" Name=Cities:Skyline Icon="/home/{username}/Games/linux games/Cities - Skylines Collection/LauncherAssets/game-logo.png"
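If quoting the environment variables inside Exec= gets unwieldy, an alternative sketch is a tiny launcher script that exports them and then runs whatever command it is given (all paths here are placeholders):

```shell
cat > /tmp/nvidia-run <<'EOF'
#!/bin/sh
# Ask the nVidia GPU to handle rendering (the PRIME render offload variables
# discussed above)
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __NV_PRIME_RENDER_OFFLOAD=1
export __VK_LAYER_NV_optimus=NVIDIA_only
exec "$@"
EOF
chmod +x /tmp/nvidia-run

# Sanity check: the wrapped command sees the variables
/tmp/nvidia-run sh -c 'echo "$__NV_PRIME_RENDER_OFFLOAD"'   # prints: 1
```

You could install the launcher somewhere stable (e.g. ~/bin/nvidia-run) and set Exec=/home/{username}/bin/nvidia-run '/home/{username}/Games/linux games/Cities - Skylines Collection/Cities.x64' in the .desktop file, so the quoting problem stays inside the launcher.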
How can I run an executable and tell it to use my discrete NVIDIA GPU instead of the onboard one?
1,544,728,212,000
I have built a server that ships in executable format, and I want to register it as a service that I can start/stop/restart. I have read the following question and many others like it: How can I make an executable run as a service? My issue is that I want the executable's process to be stopped by systemctl if I run sudo systemctl stop myexecutableservice. In my current case, if I stop the service, the background process is still running and my server is still accepting requests.
Here's an answer for the simplest case. All you need to do is create /etc/systemd/system/myservice.service. This is an INI file with a [Service] section. A very simple service looks like this: [Service] ExecStart=/usr/bin/myapp arg1 arg2 Simply ensure this service exists, then sudo systemctl start myservice or sudo systemctl stop myservice. There are lots of other things you can do in the Service section such as defining user, working directory, environment variables, how to manage forking processes, how to stop the process, priority, etc. See systemd.service(5) for details of how this section works. You can also add a [Unit] section which can be used to define a Description= that will appear in logs, and can define relationships to other units. See systemd.unit(5). An [Install] section can also be declared. That will let you define what happens when systemctl enable myservice is used. See systemd.unit(5).
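Putting the optional sections together, a slightly fuller sketch of a unit file — every name and path below is a placeholder to adapt:

```
# /etc/systemd/system/myexecutableservice.service
[Unit]
Description=My standalone server

[Service]
# Absolute path to the shipped executable, plus any arguments
ExecStart=/opt/myserver/bin/myserver
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating or editing the file, run sudo systemctl daemon-reload, then sudo systemctl enable --now myexecutableservice. Note that systemd tracks every process the service spawns in its control group, so systemctl stop also terminates backgrounded children, which is exactly the "server keeps accepting requests" situation from the question. If your binary daemonizes itself (forks and exits), either disable that behavior or set Type=forking so systemd follows the right main process.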
Creating a systemd service in ubuntu that can be started/stopped
1,544,728,212,000
I have encountered a package which installs its binaries with permissions 555 instead of the usual 755 in /usr/bin, i.e. prohibiting writing for everyone. I do not understand the reason for doing so... I can only assume that they want to add extra security, but I'm not sure. My question is as follows: can having permissions 555 for binaries in /usr/bin lead to any problems with such a binary?
If the file is owned by the root user, permissions normally don't matter: the root user will be able to do anything with the file regardless. If the file is owned by a non-root user, then 555 could be pertinent to prevent the owner of the file from rewriting it (which could otherwise allow someone to, e.g., embed malware in it or make it run some injected code).
is there a reason to revoke write permission for executables in /usr/bin
1,544,728,212,000
I noticed that most of my image files are green when the ls command is used but some of them look purple. Apparently .png and .jpg files are supposed to look Magenta while executable files are Green. But when making a Beamer presentation only the 'executable images' work. Why are my PNGs executable? I am using Ubuntu on the WSL. I get the following error on non-executable images. Would this be a Latex error? ~/LatexFiles/images/LSTM-falsepositive-DRTHIS.png: Permission denied LaTeX Warning: File `LSTM-falsepositive-DRTHIS.png' not found on input line 32. ! Package pdftex.def Error: File `LSTM-falsepositive-DRTHIS.png' not found: usi ng draft setting. The stat command gives this output: File: images/LSTM-falsepositive-DRTHIS.png Size: 90023 Blocks: 176 IO Block: 4096 regular file Device: 2h/2d Inode: 5348024557507288 Links: 1 Access: (0000/----------) Uid: ( 1000/ marcus) Gid: ( 1000/ marcus) Access: 2020-02-25 10:56:13.114775800 +0000 Modify: 2020-02-25 10:56:13.114775800 +0000 Change: 2020-02-25 11:10:56.091436300 +0000 Birth: -
Any file can be tagged as executable. chmod 644 filename.jpg will remove the execute bit: this gives read and write to the owner and read only to everyone else. Make sure that is what you want. Note that your stat output shows Access: (0000/----------), i.e. no permissions at all on that file, which is also why LaTeX reports "Permission denied"; chmod 644 fixes that too. Added from a good comment below: use chmod -x filename.jpg to clear just the execute bits without changing the other settings.
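The effect of these bits is easy to watch on a scratch file (name invented for the demo):

```shell
touch /tmp/demo-pic.png
chmod 755 /tmp/demo-pic.png
stat -c '%A' /tmp/demo-pic.png   # -rwxr-xr-x : shows up "executable" (green) in ls
chmod 644 /tmp/demo-pic.png
stat -c '%A' /tmp/demo-pic.png   # -rw-r--r-- : back to a plain, readable file
```

Run the chmod 644 over your images directory and ls should color them as ordinary files again, while LaTeX regains read access.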
Why are my PNGs executable?
1,544,728,212,000
I see there is a question similar to this but that answer relates to Git, which I am not using here. I often make small scripts that I send to others with very limited command-line skills to run. Is there a way to package my script so that users don't need to change permissions for my executable? I tried to package my script such that the first executable script changes the permissions for all the others, but at this point I have been unable to find a way to ship that first script in a way such that the user does not have to give it execute permission i.e. chmod +x First_script Am I running into this wall because there is no solution?
A simple way is to make a tarball, a compressed tar archive. If you create it with root privileges and the user extracts the contents also with root access, the permissions should be preserved. Examples sudo tar -cvzf filename.tar.gz directory # create a compressed tar archive of the directory and its contents cd /where-you-want-it-extracted sudo tar -xvzf filename.tar.gz # extract the contents from the archive There are details in man tar and you can find good tutorials about tar on the internet.
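A self-contained run-through under /tmp; no root is needed when the same user extracts, and -p explicitly asks tar to restore the recorded modes:

```shell
# Build a directory containing an executable script
mkdir -p /tmp/demo-pack/tools
printf '#!/bin/sh\necho packaged\n' > /tmp/demo-pack/tools/run-me
chmod 755 /tmp/demo-pack/tools/run-me

# Archive it, then extract into a fresh location
tar -czf /tmp/demo-pack.tar.gz -C /tmp/demo-pack .
mkdir -p /tmp/demo-unpack
tar -xpzf /tmp/demo-pack.tar.gz -C /tmp/demo-unpack

# The execute bit survived the round trip: no chmod needed on the receiving end
/tmp/demo-unpack/tools/run-me   # prints: packaged
```

This is the property that makes the tarball answer work for your users: the script arrives already executable, so they never need to run chmod +x themselves.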
How to ship files as executable? (no git)
1,544,728,212,000
I have a small function in my .zshrc. It creates a command to search for and run an executable file, that may or may not exist, in the current directory, or a parent directory somewhere up the current path. I have the logic to find the executable, if it exists, however when I try to run it, I get the following error: Using /Users/username/some/directory/executable_file function_name:15: no such file or directory: /Users/username/some/directory/executable_file Generated by the following code: if [[ $current_path != / ]] then echo "Using $current_path/executable_file" "$current_path/executable_file $@" That path is however correct, as copy-and-pasting it, or running the function with sudo, works perfectly. I've tried running the function as current user with sudo -u in the script, but it still fails. How can I run the executable script, the same as if the user had typed it in manually, or at least without sudo and a password?
The quoting around this line: "$current_path/executable_file $@" tells the shell that there's one item to be found and executed -- whatever is between the double-quotes (after the various variable expansions). In the simplest case, with no parameters to your function, it will attempt to execute: "$current_path/executable_file " ... which is probably failing, even if the $current_path/executable_file file itself exists, as there's a trailing space. If you did happen to pass parameters, the likelihood is even lower that such a file exists, namely: "$current_path/executable_file arg1 arg2 arg3..." Rearrange the quotes so that you've protected the expansion of the $current_path variable, but allowed the executable's name to end when it should: "$current_path/executable_file" "$@"
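The failure mode is easy to reproduce with a stand-in script (all names invented for the demo):

```shell
mkdir -p /tmp/demo-quote
printf '#!/bin/sh\necho "got $# arg(s)"\n' > /tmp/demo-quote/executable_file
chmod +x /tmp/demo-quote/executable_file

current_path=/tmp/demo-quote

# Broken: one giant "command name" that includes a space and the argument
"$current_path/executable_file runs" 2>/dev/null || echo "no such file"

# Fixed: path quoted on its own, arguments quoted separately
"$current_path/executable_file" runs    # prints: got 1 arg(s)
```

The first form fails with "no such file or directory" exactly as in the question, because the shell looks for a single file whose name literally contains the trailing space and argument.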
Running an executable without sudo
1,544,728,212,000
I have Python code that is run using a bash script. I want non-sudo users to be able to run it without making the Python code readable. What is the recommended pattern? Three ways I considered: Put all code under the user's HOME and make it non-readable and executable as necessary Put all code under /usr/local and add relevant bash scripts to sudoers Put all code under /root and add relevant bash scripts to the user's PATH or bin folder As there are several ways to structure this, I'd love to hear what you think the standard or recommended way is.
One common way to make sure users can't read the source code of something they are causing to run would be to write a service which acts on the user's behalf with the necessary privileges. Then give users a way to communicate with the server, such as a socket or a TCP port. At this point the code is no longer running in a context available to the user. Writing this isn't trivial, since you might need to consider for example users trying to use your service for privilege escalation.
Where to store proprietary code and executable scripts? [closed]
1,544,728,212,000
I have a game executable at ~/Games/factorio/bin/x64/factorio that I want to run from dmenu. I've created the shortcut below: [Desktop Entry] Type=Application Name=Factorio Path=/home/[USERNAME]/Games/factorio/bin/x64 Exec=factorio Terminal=false ...with [USERNAME] obviously being my username. dmenu picks up the file and displays the entry, but when I select it, nothing happens. I created another desktop file for pavucontrol below: [Desktop Entry] Type=Application Name=pavucontrol Comment=Sound manager for PulseAudio Path=/usr/bin Exec=pavucontrol Terminal=false This desktop file (pavucontrol.desktop) has the exact same syntax as factorio.desktop, yet actually works. Is there something I'm missing? I've checked the file permissions for both factorio and factorio.desktop, and both have full read permissions and write permissions for the owner. Both are marked as executable. Here is some system information if that helps: OS: Antergos Linux x86_64 Model: NC839AA-ABA a6838f Kernel: 4.12.3-1-ARCH Shell: bash 4.4.12 DE: i3
Something that always worked for me was putting the whole path in the Exec section as follows: [Desktop Entry] Type=Application Name=Factorio Exec=/home/[USERNAME]/Games/factorio/bin/x64/factorio Terminal=false I don't know exactly what the Path section is for - I never used it.
.desktop file will not launch the desired program, despite being identical in syntax to a working file
1,544,728,212,000
I can't execute a simple executable. The result of ll user@user-SATELLITE-C855-169:~/Bureau/Workspace/buildroot/buildroot/output/host/opt/ext-toolchain/bin$ ll total 16948 drwxr-xr-x 2 user user 4096 avril 18 2014 ./ drwxr-xr-x 8 user user 4096 janv. 18 21:01 ../ -rwxr-xr-x 1 user user 565152 avril 18 2014 armv5-ctng-linux-gnueabi-addr2line* -rwxr-xr-x 2 user user 589764 avril 18 2014 armv5-ctng-linux-gnueabi-ar* -rwxr-xr-x 2 user user 1035780 avril 18 2014 armv5-ctng-linux-gnueabi-as* -rwxr-xr-x 2 user user 624784 avril 18 2014 armv5-ctng-linux-gnueabi-c++* lrwxrwxrwx 1 user user 28 avril 18 2014 armv5-ctng-linux-gnueabi-cc -> armv5-ctng-linux-gnueabi-gcc* -rwxr-xr-x 1 user user 563424 avril 18 2014 armv5-ctng-linux-gnueabi-c++filt* and this is how I execute armv5-ctng-linux-gnueabi-ar user@user-SATELLITE-C855-169:~/Bureau/Workspace/buildroot/buildroot/output/host/opt/ext-toolchain/bin$ ./armv5-ctng-linux-gnueabi-ar This gives No such file or folder What is meant by the * at the end of each file -- is there something special? EDIT Proposed check by @Arkadiusz Drabczyk: user@user-SATELLITE-C855-169:~/Bureau/Workspace/buildroot/buildroot/output/host/opt/ext-toolchain/bin$ readelf -a armv5-ctng-linux-gnueabi-ar | grep "Requesting program interpreter:" [Requesting program interpreter: /lib/ld-linux.so.2] Proposed check by @steeldriver: user@user-SATELLITE-C855-169:~/Bureau/Workspace/buildroot/buildroot/output/host/opt/ext-toolchain/bin$ arch x86_64 I am using a 64 bit OS. user@user-SATELLITE-C855-169:~/Bureau/Workspace/buildroot/buildroot/output/host/opt/ext-toolchain/bin$ file armv5-ctng-linux-gnueabi-ar armv5-ctng-linux-gnueabi-ar: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, BuildID[sha1]=8dac66869f5be2dbb2bee517e289901c4be80db5, stripped The binary seems to be built for a 32-bit architecture (ELF 32-bit).
Any help, what is meant by the * in the end of each file is there sth special? Your ll alias may contain the -F option, which adds a character after a file name. From man ls: -F, --classify append indicator (one of */=>@|) to entries In many shells such as bash you can check how an alias is expanded using the type command. For example, on my system: $ type ll ll is aliased to 'ls -Alhtr --color' Now, you said that the file that gives you the error is a binary, so it may be due to an incorrect loader. Check which loader it requests and make sure you have it: $ readelf -a armv5-ctng-linux-gnueabi-ar | grep "Requesting program interpreter:" If the binary is built to run on a 32-bit system, it will request a 32-bit interpreter from /lib. If you don't have it, the binary will not start. So now, depending on the system you use, you need to find a way to add a 32-bit compatibility layer to your system. For example, on Ubuntu it's simple - just a single apt-get install will do the job; for Slackware it's described here: http://docs.slackware.com/slackware:multilib .
Executing a program gives No such file or folder error [duplicate]
1,544,728,212,000
I was reading the man page for the ancient a.out format (located here), trying to understand the evolution of Unix executable formats. I was wondering about something. The man page says that if the magic number is OMAGIC, then the text segment is not shared with other processes and no write-protection is placed on it, and the data segment begins immediately after it in memory (at the next byte). But if the magic number is NMAGIC or ZMAGIC, the text segment is write-protected and shared with other processes running the same program, and in this case the data segment begins at the beginning of the next 1024-byte block. Why is this so? Why does the sharing of the text segment necessitate the data segment beginning on a 1024-byte boundary? I have a feeling this is something that applies generally and is not specific to the a.out format.
That is done to align the data segment on a page boundary (which automatically forces it to be in a different page from the text segment). With that clue, you should be able to find further information such as John Levine's explanation in Linkers and Loaders.
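The rounding involved is ordinary alignment arithmetic; as a sketch (the address is an arbitrary example, and 1024 is the a.out page size from the man page, not necessarily the hardware page size):

```shell
#!/usr/bin/env bash
# Round the end of the text segment up to the next 1024-byte boundary,
# which is where NMAGIC/ZMAGIC place the start of the data segment.
text_end=5000
data_start=$(( (text_end + 1023) & ~1023 ))
echo "$data_start"
```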
a.out - Data segment and text segment are contiguous iff text segment is not shared. Why is this so?
1,544,728,212,000
I want to know which executable gets executed for any command in bash. Example: I have firefox installed here /usr/bin/firefox, it is in the $PATH alias browser=firefox alias br=browser Now I want to type something like getexecutable "br" and it should display /usr/bin/firefox
Here's a quick script I wrote further to my comment, that in the SIMPLE case of aliases will work. For anything with arguments/etc., though, it will fail miserably. cmd="$1" type=aliased while [ "$type" = "aliased" ]; do output="$(type "$cmd")" type="$(cut -d ' ' -f 3 <<< "$output")" cmd="$(cut -d '`' -f 2 <<< "$output" | tr -d \')" done echo "$output" You will have to (ironically!) alias something to source this, as spawning a subshell will likely remove your local aliases.
Get executable for any command [duplicate]
1,544,728,212,000
I know this is to be done by creating a .contract file in /usr/share/contractor. For example, one like this will add a menu option to open a folder as root. [Contractor Entry] Name=Open folder as root Icon=gksu-root-terminal Description=Open folder as root MimeType=inode;application/x-sh;application/x-executable; Exec=gksudo pantheon-files -d %U Gettext-Domain=pantheon-files How to adjust such a contractor file for the 'make executable' option? What about a 'Run' option for the executable files?
sudo gedit /usr/share/contractor/make_executable.contract Add this content and save: [Contractor Entry] Name=Make executable Icon=name.of.icon.wanted Description=Make a file executable MimeType=inode;application/x-sh;application/x-executable; Exec=gksudo chmod +x %U Should do the trick. But it is possible that in elementaryOS a file that was made executable may still lack the option of being run from context menu or click: it may open instead in a text editor, etc. To add a 'Run' menu entry to run such a file create a new contractor entry sudo gedit /usr/share/contractor/run.contract like this: [Contractor Entry] Name=Run Icon=run Description=Run MimeType=inode;application/x-sh;application/x-executable; Exec=sh %U
How to add 'Make executable' and 'Run' entries to Elementary OS file manager context menu?
1,368,139,064,000
I'm trying to get a portable LaTeX installation running on a Gentoo server. The LaTeX files are already installed. When I try to run ./pdflatex in the path/to/texlive/bin/x86_64-linux/ I get the message exec format error: ./pdflatex. I'm running the commands over SSH in zsh. Google told me that this could mean that I am using the wrong executable. But when I run uname -m I get x86_64 so I thought /x86_64-linux/ contains the correct binaries. Additionally I have tried all the other LaTeX bins for Linux (i386-linux, armel-linux, armhf-linux, aarch64-linux) but none of them worked. When I list the contents with dir I can also see that there is a pdflatex file (link). Also ls -l tells me that the file (which pdflatex links to) has read and execute permissions for all users. Note: I do not have root rights so I have to use the portable installation. How can I run the pdflatex command?
So I found the solution for my problem. I'm not sure if this will help anybody else, but I post the answer here for completeness. Even though uname -a returns x86_64, I was told that I should use objdump -a /bin/ls. This returns file format elf32-i386, so the correct binaries for me are the i386-linux binaries. As I wrote, I tested them and they did not work. The problem was that I moved the (pre-installed) files to my server using FTP with FileZilla. FileZilla offers a transfer type setting, which is set to automatic by default. Setting this to binary explicitly and moving the files to the server again did the trick. Now it works.
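A simple way to catch this kind of transfer corruption early is to compare checksums on both ends. A local, self-contained sketch of the idea (the temp files are throwaways standing in for the binary before and after transfer):

```shell
#!/usr/bin/env bash
# After copying a binary, its checksum must match the original; an
# ASCII-mode FTP transfer that mangled the bytes would not match.
src=$(mktemp)
dst=$(mktemp)
head -c 4096 /dev/urandom >"$src"
cp "$src" "$dst"                 # stand-in for a binary-mode transfer
a=$(md5sum <"$src" | cut -d' ' -f1)
b=$(md5sum <"$dst" | cut -d' ' -f1)
if [ "$a" = "$b" ]; then verdict="transfer OK"; else verdict="corrupted"; fi
echo "$verdict"
rm -f "$src" "$dst"
```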
Run portable LaTeX on Gentoo
1,368,139,064,000
I'm trying to make a desktop file to start Strife but it doesn't work as it should. For the current path I'm using this: '"$(dirname "$1")"' And to run the executable I use this command: '"$(dirname "$1")"/Strife/bin/strife'
I would be very surprised if a .desktop file is handled by your shell. You'd be better off hardcoding the full path in the Exec directive. I found the GNOME Desktop Entry Specification which says: The Exec key must contain a command line. A command line consists of an executable program optionally followed by one or more arguments. The executable program can either be specified with its full path or with the name of the executable only. If no full path is provided the executable is looked up in the $PATH environment variable used by the desktop environment.
Gnomes .desktop current path
1,368,139,064,000
It is said that in Linux, unlike Windows, there's no clear border between executables and other files. Well, in Windows, I write a C++ program, and then it is preprocessed, compiled and linked to become a distinct file: an executable. The changes are so great that they are not reversible. But in Linux, a simple text file containing code is executed. So what do compilation and linkage do? If code can be executed as-is, why is it compiled? What is gained in this process, and what is the main difference between the code and the final (so-called) executable file in Linux? Why is the portability of programs limited on Linux, with lots of (version-specific) dependency requirements, if they are just code?
tl;dr: the difference is the executable bit. The answer lies in the UNIX permissions model. To be honest, I forget what the Windows permissions model is, but in UNIX (and hence GNU/Linux), there are three main permission bits that can be set on a file: read, write, and execute. These bits can be set on anything. There are two main types of files that you would want to set the executable bit on: Binaries Scripts The first type works exactly as .exes do in Windows. The only difference is that the ability of the file to be executed is determined by a permission bit in the filesystem, instead of the file extension. Binaries do still have a format, just like .exes. On GNU/Linux, this format is called ELF. The Linux kernel has special logic that tells it how to read the format of ELF binaries. When you execute a binary, it is this logic that actually runs the code. The part that is confusing you is the second type of executable: scripts. Scripts are regular text files that can be executed by an interpreter, like python or bash. Scripts start with something called a shebang, which looks like this: #!. When a script is "executed", the kernel recognizes the shebang and executes whatever binary is specified after it, with the path of the script you are executing as an argument. For example, let's say I have a script with the executable bit set, at the path /home/alex/bin/test_script. This script has the following as the first line: #!/bin/bash When you execute this script, the kernel will recognize the shebang at the beginning. It will then load /bin/bash and pass it /home/alex/bin/test_script as the first argument. This would be the equivalent of executing the following on the command line: /bin/bash /home/alex/bin/test_script In this way, bash is loaded to interpret, or "execute", the script. As a small aside, the change from source to binary is not so great that it cannot be reversed. Retrieving source code from a binary is called decompiling.
On executable files in Linux [closed]
1,368,139,064,000
Looking at the files in my /etc/profile.d directory: cwellsx@DESKTOP-R6KRF36:/etc/profile.d$ ls -l total 32 -rw-r--r-- 1 root root 96 Aug 20 2018 01-locale-fix.sh -rw-r--r-- 1 root root 1557 Dec 4 2017 Z97-byobu.sh -rwxr-xr-x 1 root root 3417 Mar 11 22:07 Z99-cloud-locale-test.sh -rwxr-xr-x 1 root root 873 Mar 11 22:07 Z99-cloudinit-warnings.sh -rw-r--r-- 1 root root 825 Mar 21 10:55 apps-bin-path.sh -rw-r--r-- 1 root root 664 Apr 2 2018 bash_completion.sh -rw-r--r-- 1 root root 1003 Dec 29 2015 cedilla-portuguese.sh -rw-r--r-- 1 root root 2207 Aug 27 12:25 oraclejdk.sh This is Ubuntu on the "Windows Subsystem for Linux (WSL)". Anyway the content of oraclejdk.sh is like this: export J2SDKDIR=/usr/lib/jvm/oracle_jdk8 export J2REDIR=/usr/lib/jvm/oracle_jdk8/jre export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files/WindowsApps/CanonicalGroupLimited.Ubuntu18.04onWindows_1804.2019.522.0_x64__79rhkp1fndgsc:/snap/bin:/usr/lib/jvm/oracle_jdk8/bin:/usr/lib/jvm/oracle_jdk8/db/bin:/usr/lib/jvm/oracle_jdk8/jre/bin export JAVA_HOME=/usr/lib/jvm/oracle_jdk8 export DERBY_HOME=/usr/lib/jvm/oracle_jdk8/db I'm pretty sure it's run when the bash shell starts. My question is, why don't all the *sh files have the x permission bit set? Don't all shell scripts need the x perission bit set in order to be executable? Please consider me a bit of a novice.
A shell script only needs to be executable if it is to be run as ./scriptname If it is executable, and if it has a valid #!-line pointing to the correct interpreter, then that interpreter (e.g. bash) will be used to run the script. If the script is not executable (but still readable), then it may still be run with an explicit interpreter from the command line, as for example in bash ./scriptname (if it's a bash script). Note that you would have to know what interpreter to use here as a zsh script might not execute correctly if run with bash, and a bash script likewise would possibly break if executed with sh (just as a Perl script would not work correctly if executed by Python or Ruby). Some script, as the one you show, are not actually scripts but "dot-scripts". These are designed to be sourced, like . ./scriptname i.e. used as an argument to the dot (.) utility, or (in bash), source ./scriptname (the two are equivalent in bash, but the dot utility is more portable) This would run the commands in the dot-script in the same environment as the invoking shell, which would be necessary for e.g. setting environment variables in the current environment. (Scripts that are run as ordinary are run in a child environment, a copy of its parent's environment, and can't set environment variables in, or change the current directory of, their parent shells.) A dot-script is read by (or "sourced by") the current shell, and therefore do not have to be executable, only readable. I can tell that the script that you show the contents of is a dot-script since it does not have a #!-line (it does not need one) and since it just exports a bunch of variables. I believe I picked up the term "dot-script" from the manual for the ksh93 shell. I can't find a more authoritative source for it, but sounds like a good word to use to describe a script that is supposed to be sourced using the . command.
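The difference between executing and sourcing is easy to demonstrate; a small self-contained sketch (the temp file stands in for any dot-script):

```shell
#!/usr/bin/env bash
# An executed script runs in a child shell and cannot change this
# shell's variables; a sourced one runs right here and can.
s=$(mktemp)
printf 'MYVAR=from_script\n' >"$s"

MYVAR=original
bash "$s"            # executed with an explicit interpreter (no x bit needed)
after_exec=$MYVAR    # unchanged: the assignment happened in a child shell

. "$s"               # sourced into the current shell
after_source=$MYVAR  # now the assignment is visible
echo "after executing: $after_exec; after sourcing: $after_source"
rm -f "$s"
```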
Must shell scripts be executable?
1,368,139,064,000
I source a bash script (Child) inside another bash script (Parent), somewhere in the middle of Parent. The argument passed to Parent when executing it gets passed to the Child. How can I prevent this behavior? I don't want the arguments of Parent to be transferred to Child as well. NOTE: The Parent/Child analogy is not about parent and child processes, but about something that comes before the other. Also, I want the sourced script to have the environment of the Parent script (except the arguments that are passed to it). This is because the default values for arguments to the scripts are different. See the example below. The parent script uses some of the functions defined in the sourced script, and the sourced script creates arrays that are used by the parent script. Also, I need the positional parameters of the parent script after the Child script is sourced. #!/usr/bin/env bash # - goal: "Parent" main() { # # Path #dScriptP="$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" # # Argument ParentArgument=${1:-40} echo "ParentArgument=${ParentArgument}" . Child.sh } main "$@" #!/bin/false # shellcheck shell=bash # - goal: "Child" main() { # # Path #dScriptP="$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" # # Argument ChildArgument=${1:-30} echo "ChildArgument=${ChildArgument}" } main "$@" $ ./Parent.sh 50 ParentArgument=50 ChildArgument=50 Desired Output $ ./Parent.sh 50 ParentArgument=50 ChildArgument=30
The POSIX description of the . "utility" (Bash's source is a synonym) goes: NAME: dot - execute commands in the current environment SYNOPSIS: . file DESCRIPTION: The shell shall execute commands from the file in the current environment. And the execution environment is defined to include: Shell parameters that are set by variable assignment (see the set special built-in) or from [the environment] That "set by variable assignment" doesn't really seem to match how the positional parameters (arguments) are initially assigned at shell startup, but the reference to set seems to imply that they should be included. And in any case, all shells I could find include them. So, changing shells isn't likely to work, but you have some options: Just unset the arguments, with set --. But then they wouldn't be available in the main shell after that either. In Bash/ksh/zsh, you could save them in an array first, args=("$@"), but of course that array would be visible to the sourced script. Run the . or source call in a function, since functions have their own set of arguments. Something like source() { . "$1"; } and then source script.sh, though that would make the name of the sourced file visible in $1. Though that can be worked around; in Bash you could use source() { local f=$1; shift; . "$f"; };. In Bash/ksh/zsh, you could add your own arguments to . or source, in which case only they would be available to the sourced script. After . script.sh foo the sourced script would only see foo in $1. But you can't pass an empty list of arguments that way. Then again, if you don't want the other script to see the environment of the main script, then don't source it, but run it as a command instead, passing any required data explicitly through the script's arguments, stdin and stdout.
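Applying the function-wrapper option to the question's Parent/Child example might look like the sketch below (source_clean and the temp file are made-up names for the demo):

```shell
#!/usr/bin/env bash
# Wrap `.` in a function: the sourced file then sees the function's own
# positional parameters, not the parent script's.
child=$(mktemp)
printf 'ChildArgument=${1:-30}\n' >"$child"

source_clean() {
  local f=$1
  shift            # drop the filename; anything left becomes $1, $2, ...
  . "$f"
}

set -- 50                      # simulate the parent being run as: ./Parent.sh 50
ParentArgument=${1:-40}
source_clean "$child"          # Child sees no arguments, so it falls back to 30
echo "ParentArgument=$ParentArgument ChildArgument=$ChildArgument"
rm -f "$child"
```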
Prevent bashscript argument being transferred to a child sourced script
1,368,139,064,000
I have the Ecplise Platform (the programming environment, see https://eclipse.org/) on my system. It can be run by typing "eclipse" into the terminal. Now I installed eclipse prolog (see http://www.eclipseclp.org/ ). I followed the instructions from http://eclipseclp.org/Distribution/Current/6.1_224_x86_64_linux/Readme.txt ) and now I want to start it. In these instructions they say that it can be run by typing "eclipse" into the terminal. But if I do that, only the Eclipse programming environment starts, not the eclipse prolog thingy. What do I do now? I am using Linux Mint 17, 64 bit.
Figure out where the new eclipse is installed, and don't just enter eclipse but the full path: /where/the/new/eclipse/is/installed/bin/eclipse If this new eclipse becomes your first choice, you may want to define an alias in your startup files (e.g. .profile for sh): alias eclipse=/where/the/new/eclipse/is/installed/bin/eclipse Now, if you enter eclipse, the new one will be run. To execute the old one, you will have to specify its full path. You can even define two aliases, one for each eclipse: alias eprolog=/where/the/new/eclipse/is/installed/bin/eclipse alias eplatform=/where/the/old/eclipse/is/installed/bin/eclipse ... and enter either eprolog or eplatform at the shell prompt.
How to run a program via terminal if it shares its name with another program
1,368,139,064,000
Outside of a shell, such as running some other process, what does the term current directory mean, and is it possible to execute a binary without specifying the full path or the preceding './', assuming that the current directory contains the executable?
Yes, it is possible by setting up the search path appropriately (either containing your working directory explicitly or by containing "./"), but it is good practice to have the "./" in front of the program name. The reason is security: A malware could write an executable file with the name of a commonly used program (say, ls) and the next call to ls will execute the local copy instead of /bin/ls. Therefore, standard PATH settings under UNIX do not contain "./".
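If you do accept that trade-off, appending "." to the end of PATH at least keeps system binaries winning the lookup over a local file of the same name; a throwaway demonstration:

```shell
#!/usr/bin/env bash
# Run a local program by bare name by putting "." at the *end* of PATH.
d=$(mktemp -d)
printf '#!/bin/sh\necho local-hello\n' >"$d/myprog"
chmod +x "$d/myprog"

cd "$d"
PATH="$PATH:."     # appended, not prepended - system binaries still win
out=$(myprog)      # found through the "." entry, no ./ required
echo "$out"
cd / && rm -rf "$d"
```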
Is it possible to execute a binary without specifying the full path or the preceding './'?
1,368,139,064,000
The environment variable PATH is the search path for executable commands, so I thought changing the PATH to something that doesn't exist (for instance, export PATH=blah) would make me unable to use any command. After I change it, it doesn't let me use all commands (e.g. I can't use ls). But apparently, I can still use numerous commands, and I can still use export and change it back. Aren't all commands just executable files in the search path? Where are the executable files for these still-usable commands located? How come I can still use them when my search path is gibberish?
You can still run builtin commands, i.e. commands internal to your shell and thus not needing to be backed by an executable. For example, if your shell is bash, you can have a look at: https://www.gnu.org/software/bash/manual/html_node/Shell-Builtin-Commands.html Note that some commands that internally affect the shell, such as cd, exec and exit, can't be provided by an external binary because they just wouldn't work as expected (at all, even).
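You can check which commands survive a broken PATH with type, which is itself a builtin; a quick sketch:

```shell
#!/usr/bin/env bash
# Builtins keep working with a nonsense PATH because the shell runs
# them itself; external commands can no longer be looked up.
OLDPATH=$PATH
PATH=blah

cd_type=$(type -t cd)              # "builtin" - no PATH search involved
ls_found=yes
command -v ls >/dev/null 2>&1 || ls_found=no

PATH=$OLDPATH                      # restoring is just a plain assignment
echo "cd is a $cd_type; ls findable while PATH was broken: $ls_found"
```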
How come I can change my PATH to gibberish and still use commands?
1,368,139,064,000
I have a directory like this: /home/user/Project/ Inside Project and after many subdirectories, I have a script named ninja that needs to be run with ./ninja. In other words: /home/user/Project/sub1/sub2/sub3/ninja Of course, I can just cd into Project and execute ./ninja. However, I am writing an alias to run that command via bashrc. alias runNinja = cd ~/Project/sub1/sub2/sub3 && ./ninja Can we do it in one command? alias runNinja = .~/Project/sub1/sub2/sub3/ninja The above obviously doesn't work, and neither does this alias runNinja = ./home/user/Project/sub1/sub2/sub3/ninja TL;DR: How can I shorten an alias to run a script in a nested directory?
Just append the path to the $PATH variable in .bashrc like below: export PATH=$PATH:/home/user/Project/sub1/sub2/sub3 And execute it from wherever you want, even without ./ $ ninja But of course you can also set an alias: alias runNinja='/home/user/Project/sub1/sub2/sub3/ninja' And execute it from wherever: $ runNinja If you deliberately wanted to be in that directory when running it (e.g. if the script reads or writes files in that directory, or has other dependencies there), you have to write a function like below in your ~/.bashrc file or profile: runNinja() { cd /home/user/Project/sub1/sub2/sub3 && ./ninja "$@" }
alias to run dot slash (./) on a file without cd using bash
1,368,139,064,000
I used the following way to execute a collection of Bash script files, in the current Bash session: source ~/myScripts/{assignments.sh,nginx_conf.sh,php_conf.sh,drush_install.sh} It feels uncomfortable to maintain; a vertical listing would be better. Pseudocode: assignments.sh nginx_conf.sh php_conf.sh drush_install.sh How would you do that vertically? By the way, I'm not sure that a source here-document like this one is the best way. Update I now understand that my one-line source operation was doomed to fail because, as of Bash 4.3.48(1), the Bash interpreter evaluates source in such a way that it can only work with one file, and any other file beyond it will be evaluated as an argument to the first file (a brace set {} wouldn't help with this). I get the impression this is the same as with bash instead of source.
A heredoc places each filename on its own line without anything else (though assumes that the filenames do not contain anything crazy like a newline) and allows for a specific ordering of the filenames: while read f; do source ~/myScripts/"$f" done <<SRC_LIST assignments.sh nginx_conf.sh php_conf.sh drush_install.sh so_forth.sh SRC_LIST this also avoids the problem of source file [arguments] where the subsequent filenames would be treated as arguments to assignments.sh (unless you did mean the subsequent to be arguments??). The list would need to be manually kept up to date with what is on the filesystem. Another option would be to skip the tedium of listing the files and glob them in; this assumes that all the matching files in the directory can and should be sourced in (so no mixing in other random *.sh files that must not be sourced). However this is complicated by the edge case of when no files are matched by the glob, in which case bash will by default pass the literal filename of ~/myScripts/*.sh in to be sourced, so that must be worked around (temporarily, if necessary) and nothing sourced if there are no matches: REVERT=$(shopt -p nullglob) shopt -s nullglob for f in ~/myScripts/*.sh; do source "$f" done $REVERT with this method the filenames would need to be named in a way that the glob matches them in a correct order if there is an order the files need to be sourced in. (In ZSH one would not need the shopt calls as instead for f in ~/myScripts/*.sh(N); do would suffice to perform a null glob. Other shells will vary in how they handle globs and what to do when nothing matches.)
Set two or more files for execution (via source) in a comfortable way to read
1,368,139,064,000
When a daemon is executed, is the executable copied to memory? If so, can it be copied encrypted? If not, is there a way to prevent the executable from being copied to memory? The executable is stored on an encrypted tmpfs.
When a program is executed, the necessary code pages are loaded into memory on demand. This is transparent: the kernel loads the pages when it needs them, and tries to be smart by preloading pages that are likely to be needed soon. The code has to be decrypted before it can be executed. If the code is stored on an encrypted filesystem, it is decrypted inside the filesystem driver stack, just like any other piece of data stored in a file. It is pointless to encrypt a RAM filesystem. The key exists on the live system anyway (to decrypt the file). A subject can access the files if and only if he can access the key, so you need to do access control on the key. You might as well cut the middleman and control access to the files. Access control on a live system relies on permissions. Cryptography is not involved. If you don't want certain users to access a particular file, change the file's permissions accordingly. If someone has physical access to the machine, they have all the permissions they want. No amount of cryptography can change that. Cryptography protects access to offline data, which is stored separately from the key.
are daemon tmpfs executables copied unencrypted to memory upon execution? (prevent if so?)
1,368,139,064,000
So I just bought a new Samsung T7 Portable SSD. I initially intended to format it to exFAT, for use with both Windows, MacOS and Linux, but upon inspection, the disk comes with a default file system of HPFS/NTFS/exFAT. I didn't know that was a thing, but I decided to test it out. To test it out, I simply copied a few ASCII text files to the disk, but regardless of method for copying, and file extension, they all get the executable flag set. I don't understand why. Why is it like this, and how can I avoid it? I want the files copied exactly as they are. Complete output showing changed permissions. user@ubuntu:~$ echo "test text file" > test.txt user@ubuntu:~$ echo "test test test" > test user@ubuntu:~$ echo "print('test')" > test.py user@ubuntu:~$ user@ubuntu:~$ ls -l test* -rw-rw-r-- 1 user user 15 July 18 01:20 test -rw-rw-r-- 1 user user 14 July 18 01:20 test.py -rw-rw-r-- 1 user user 15 July 18 01:20 test.txt user@ubuntu:~$ user@ubuntu:~$ mkdir /media/user/T7/testdir user@ubuntu:~$ cp test /media/user/T7/testdir/ user@ubuntu:~$ rsync test.txt /media/user/T7/testdir/ user@ubuntu:~$ rsync -a test.py /media/user/T7/testdir/ user@ubuntu:~$ user@ubuntu:~$ ls -l /media/user/T7/testdir total 384 -rwxr-xr-x 1 user user 15 July 18 01:23 test -rwxr-xr-x 1 user user 14 July 18 01:20 test.py -rwxr-xr-x 1 user user 15 July 18 01:23 test.txt Here you can see I've tried both cp, rsync and rsync -a, but they end up as executables every single time. Why? Edit: I tried doing exactly the same to a WD HDD that comes with NTFS by default. There, the files get the 777 permission (rwxrwxrwx). Does it have something to do with the disk itself? Clearly my knowledge is lacking here.
HPFS/NTFS/exFAT is a partition type. It claims the partition contains one of the named filesystem types, but that does not have to be the complete truth. Try lsblk -o +FSTYPE or look into /proc/mounts while the partition is mounted to see the actual filesystem type. Anyway, HPFS is unlikely, so the SSD most likely is already formatted with either an NTFS or exFAT filesystem. In terms of use with Linux, both these filesystem types lack a certain property: they don't support Unix-style ownership/group/permissions information. NTFS has ACLs which could be used to implement Unix-style ownerships and permissions; it could even support Linux's ACLs if necessary. But before it can do that, the Linux NTFS driver needs a conversion table between Unix style user and group IDs (UIDs and GIDs, basically just simple numbers) and Windows-style security IDs (SIDs: long strings of groups of numbers separated by dashes). If this is not provided, the driver won't be able to know how it should record the file permissions information on the filesystem, and it falls back to working just like with a filesystem that cannot support the concept of users and permissions at all. exFAT is a filesystem designed for removable media: it is assumed that whoever physically possesses the media will be able to read everything stored on it anyway, so there is not much point for permissions. So like FAT32 and other filesystems in the FAT family, it has no real concept of file ownerships and permissions at all, and no way to store them. But Linux - or any Unix-like system - fundamentally requires that every file must be associated with some user and some group, and must have at least the classic set of user/group/other permissions, or a more complex ACL. All the system calls and operating system commands expect every file to have those. So if the filesystem does not support those, the filesystem driver needs to fake them.
For the purpose of providing fake ownerships and permissions when the filesystem has none, both the NTFS-3G and exFAT filesystem drivers support a set of mount options which you can use to define two sets of permissions: one for all files, and another for all directories. Without being able to store permissions information in the metadata of each file on the filesystem, that's all you can get.

The difference between the WD NTFS HDD and the Samsung SSD indicates that the Samsung most likely already has an exFAT filesystem on it, and the exFAT and NTFS drivers simply have different default settings for faking the permissions... or the NTFS HDD has an ACL on its root directory that would be expressed in Windows as "Everyone - Full Control", configured to be inherited by any new file or sub-directory. Since "Everyone" in Windows is a globally-defined standard SID, it's one of the very few SIDs the Linux NTFS driver will be able to understand by default.
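The relevant options for both drivers are uid, gid, and the fmask/dmask pair (or a combined umask). As a sketch - the device path, mount point, and uid/gid of 1000 are assumptions for a typical single-user setup - an /etc/fstab line like this mounts an exFAT partition so that files appear as rw-r--r-- (644) and directories as rwxr-xr-x (755):

```
/dev/sdb1  /media/user/T7  exfat  uid=1000,gid=1000,fmask=0133,dmask=0022  0  0
```

The masks name the bits to remove: 0666 & ~0133 = 0644 for files, and 0777 & ~0022 = 0755 for directories. NTFS-3G accepts the same uid/gid/fmask/dmask options, so the equivalent line for an NTFS partition just swaps exfat for ntfs-3g.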
Files become executable when copied
1,368,139,064,000
Note that the FS is mounted with the relatime option (thus the recent last access time is not shown). Is there a way to see when an executable (ELF 64-bit LSB executable) was run for the last time, using root access?

The idea is that a complex production application can run the same executable that currently exists at two different places (pending installation). I just want to ensure that only one of them is run from now on, without interrupting anything (I cannot temporarily remove one, for instance; it's 99% sure only one of them is used, but I need to be 100% sure).
As far as I recall, relatime updates the access time every 24 h, or if the old atime was earlier than or equal to mtime. That would let you know if the program has been unused for a full day; or, if you don't care to wait, run touch programfile and then see if the atime has changed later. Plain touch would set atime == mtime, but atime gets updated if it's <= mtime, so that's ok. You could also do something like touch -a -d 1999-12-31 instead to change atime without modifying mtime. Like so:

$ cp /bin/ls .
$ touch ./ls
$ stat ./ls
...
Access: 2022-01-07 13:07:16.640132600 +0200
Modify: 2022-01-07 13:07:16.640132600 +0200
Change: 2022-01-07 13:07:16.640132600 +0200
Birth: -
$ ./ls > /dev/null
$ stat ./ls
...
Access: 2022-01-07 13:07:57.175525517 +0200
Modify: 2022-01-07 13:07:16.640132600 +0200
Change: 2022-01-07 13:07:16.640132600 +0200
Birth: -

The access timestamp was updated when the program was run. Of course, atime would also change if the file is just read. It's probably not that common for binary files to be read just like that, but e.g. a backup tool could do just that. That would make atime useless for this. But if atime doesn't change, then the file has been neither read nor executed, so it can be used to prove the negative case.
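A minimal sketch of that touch -a marker trick (the file here is a throwaway stand-in for the real executable): rewind atime into the past without touching mtime, then compare the two timestamps with GNU stat. As long as atime is still older than mtime, the file has been neither read nor executed since the marker was set:

```shell
f=$(mktemp)                       # stand-in for the real executable
touch -a -d '1999-12-31' "$f"     # rewind atime only; mtime stays put
atime=$(stat -c %X "$f")          # access time, seconds since epoch
mtime=$(stat -c %Y "$f")          # modification time
if [ "$atime" -lt "$mtime" ]; then
    echo "not accessed since the marker was set"
fi
```

Re-run the stat comparison later; once atime catches up to or passes mtime, something has read or executed the file in the meantime.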
Is it possible to show when an executable was run for the last time?
1,368,139,064,000
I have an external hard-drive formatted as NTFS which I use to back-up and store files from both Linux and Windows (as I am dual-booting). I recently bought a new computer and installed Linux Mint 20 on it, and I would like to copy some of the files from my back-up to my computer's internal HD. I noticed that every single file in every single subfolder I copied from the hard-drive has had the option Allow executing file as program enabled in its permissions. How can I safely recursively run through a directory and set all files in all subdirectories as non-executable (including hidden ones and in hidden folders starting with .)? Also, is there a way of preventing this to happen in a NTFS hard-drive or would I be better off creating two partitions on it, an EXT4 for Linux and a NTFS for Windows?
The answer is to use:

chmod -R -x+X .

The -x first removes the execute bit from every file and directory, and the following +X adds it back only where it makes sense - i.e. for directories, since after -x no regular file has an execute bit left for X to match. See chmod(1).
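A quick demonstration in a throwaway directory (all the names here are made up) of why the two-step mode works: -x strips execute everywhere, and the capital +X then restores it only to directories, keeping them traversable:

```shell
d=$(mktemp -d)
mkdir "$d/sub"
touch "$d/file.txt" "$d/sub/script.sh"
chmod +x "$d/file.txt" "$d/sub/script.sh"   # simulate the bogus execute bits

chmod -R -x+X "$d"

ls -l "$d/file.txt"     # execute bits gone from the files...
ls -ld "$d/sub"         # ...but the directory keeps its execute (search) bit
```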
Recursively setting all files in a directory as non-executable [duplicate]
1,368,139,064,000
What is the difference between running ./executable and plain executable? Why is it that, sometimes, some executables (non-Linux commands) don't require ./? If I have installed an executable through a makefile (a physics code), how can I remove it and install an updated version? Is removing (rm) the code sufficient? The executable, in this case, is executed without ./.
In a UNIX environment (and even in other systems like DOS, Windows, etc.) there are directories where the shell looks for executables. In a Unix environment the list is defined in the PATH variable. You can see the directories in the PATH variable by executing the following command:

$ echo $PATH

The result will be something like:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin

As you can see, the variable is a list of directories separated by colons. When you run a command, e.g. ls, the system will search for an executable in the first directory of the list (in the example, /usr/local/sbin). If it doesn't find a file named ls there, it will try the next directory, until it finds it. So if your ls command is located in /usr/bin, that one will execute. Or, you'll get a command not found error if the shell cannot find it anywhere.

However, there are other ways to call an executable. Imagine you have two programs named ls in two directories in the PATH, and you want to run the second one. The way of doing that is running /usr/bin/ls, so you specify which one you want.

The . is a shortcut for the current directory. So if you're at /home/user, ./configure is a shortcut for /home/user/configure.

You can remove a file from the PATH by looking for the place it's located and removing it. However, you might prefer to manage binaries installed into your system through a package manager, available in most modern distributions (like rpm, dpkg, pacman, etc.). If the Makefile creates several executables, it's going to be easier to remove them this way (also, the makefile might create some library files and several other things; that's why it's easier to use a package management tool). Sometimes a Makefile might provide an uninstall routine (i.e. make uninstall), but I'm not sure how often that happens. If you are updating a program through a new makefile, a new make install would likely replace the old binaries, but there's no guarantee of that.
You can always find out what is the executable for a certain command by running which. For instance, if you want to know where ls is:

$ which ls
/usr/bin/ls
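Here is a small sketch of the search order in action (the directory and script names are made up): a script is invisible to bare-name lookup until its directory is added to PATH:

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello from demo\n' > "$dir/hello"
chmod +x "$dir/hello"

PATH="$dir:$PATH"   # prepend our directory to the search list
hello               # now found by the PATH search, no ./ needed
command -v hello    # prints the full path the shell resolved
```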
What is the difference between an executable that requires ./ and one that doesn't?
1,368,139,064,000
This is on a Mac but I figure it's a Unixy issue. I just forked a Github repo (this one) and cloned it to a USB stick (the one that came with the device for which the repo was made). Upon lsing I notice that README.md sticks out in red. Sure enough, its permissions are:

-rwxrwxrwx 1 me staff 133B 15 Jun 08:59 README.md*

I try running chmod 644 README.md but there's no change. What's going on here?
Because the 'executability' of a file is a property of the file entry on UNIX systems, not of the file type like it is on Windows.

In short, ls will list a file as being executable if any of the owner, group, or everyone has execute permissions for the file. It doesn't care what the file type is, just what the permissions are. This behavior gives two significant benefits:

1. You don't have to do anything special to handle new executable formats. This is particularly useful for scripting languages, where you can just embed the interpreter with a #! line at the top of the file. The kernel doesn't have to know that .py files are executable, because the permissions tell it this. This also, when combined with binfmt_misc support on Linux, makes it possible to do really neat things such as treating Windows console programs like native binaries if you have Wine installed.

2. It lets you say that certain files that are technically machine code can't or shouldn't be executed. This is also mostly used with scripting languages, where it's not unusual to have libraries that are indistinguishable in terms of file format from executables. So, using the Python example above, it lets you say that people shouldn't be able to run arbitrary modules from the Python standard library directly, even though they have a .py extension.

However, this all kind of falls apart if you're stuck dealing with filesystems that don't support POSIX permissions, such as FAT (or NTFS, if you don't have user mappings set up). If the filesystem doesn't store POSIX permissions, then the OS has to simulate them. On Linux the default is to have read, write and execute permissions set for everyone, so that users can just do what they want with the files. Without this, you wouldn't be able to execute scripts or binaries off a USB flash drive, because the kernel doesn't let you modify permissions on such filesystems per-file.
In your particular case, git stores the permissions it sees on the files when they are committed, and the original commit of the README.md file (or one of the subsequent commits to it) probably happened on a Windows system, where such things are handled very differently, and thus git just stored the permissions as full access for everyone, similarly to how Linux handles filesystems without permissions support.
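You can inspect exactly what git recorded: it keeps just one executable bit per file, surfaced as mode 100755 versus 100644 by git ls-files -s. A sketch in a scratch repository (the file names and config identity are made up):

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo text > plain.txt                        # ordinary file
echo text > tool.sh && chmod +x tool.sh      # executable file
git add . && git commit -qm demo
git ls-files -s      # shows 100644 for plain.txt, 100755 for tool.sh
```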
Why would README.md show up as an executable?
1,368,139,064,000
I have such a program to check methods of data from the command line:

me at me in ~/Desktop/Coding/codes
$ cat check_methods.py
#! /usr/bin/env python
from sys import argv

methods = dir(eval(argv[1]))
methods = [i for i in methods if not i.startswith('_')]
print(methods)

$ python check_methods.py list
['append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
$ python check_methods.py dict
['clear', 'copy', 'fromkeys', 'get', 'items', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values']

I'd like to run the program directly from bash, like:

$ check_methods.py list
-bash: check_methods.py: command not found

How to achieve it?
Specify the path to the script, since it isn't in $PATH:

./check_methods.py list

(The script also needs the execute bit set: chmod +x check_methods.py.) And never add . to $PATH.
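A self-contained sketch of the whole mechanism (demo.py is a stand-in, not the original script): with a #! interpreter line and the execute bit set, the script runs directly by its path, with no python prefix:

```shell
cd "$(mktemp -d)"
printf '#!/usr/bin/env python3\nimport sys\nprint(sys.argv[1])\n' > demo.py
chmod +x demo.py     # without this, ./demo.py fails with "Permission denied"
./demo.py list       # the kernel hands the script to python3 via the #! line
```

To call it by bare name from anywhere, copy it into a directory that is already on PATH (e.g. ~/.local/bin on many distributions) rather than adding . to PATH.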
Run a Python script without specifying its interpreter
1,368,139,064,000
I want to run an executable main and redirect all its output to /dev/null, while measuring its runtime with time and writing the result to runtime.out. Since the task is long I also have to run the whole thing with nohup. I tried the following:

nohup time ./main &> /dev/null &> runtime.out &

This just outputs everything to runtime.out. I don't need the output of main, just the runtime, saved into a file.
time has an option made for exactly this:

nohup time -o runtime.out ./main &> /dev/null &

If it were scripted and didn't require a tty, I'd rather use setsid than nohup + &, because it "daemonizes" better, and can still be sent a HUP signal if needed:

setsid time -o runtime.out ./main </dev/null &>/dev/null

Also note that here (as in the OP's question) time is /usr/bin/time, which has a different output format than bash's builtin time command. It appears that /usr/bin/time --portability gives a similar output if needed.
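If GNU time (and its -o option) isn't installed, the shell's redirections alone can separate the two streams; note this variant uses bash's builtin time, whose output format differs from /usr/bin/time. The trick is to group the command so its own output can be discarded inside the braces, then redirect the group's stderr - where time writes its report - to the file (sleep 0.2 stands in for ./main):

```shell
{ time sleep 0.2 >/dev/null 2>&1; } 2> runtime.out
cat runtime.out   # only the real/user/sys timing lines
```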
Using nohup and time with different outputs