With curl we can delete, rename and move files on an FTP server, as in the command below:

curl -v -u username:pwd ftp://host/FileTodelete -Q "DELE FileTodelete"

Can we untar or unzip files the same way? I mean, instead of DELE FileTodelete, can we send an untar/unzip command to extract a file on the remote server? Thanks
No, in general this isn't possible. An FTP server usually has commands to get information about files and directories and to store, retrieve, delete and rename files. Commands to mount devices and to send messages to users are also standardized but not implemented in current servers. See the list of FTP commands on Wikipedia for details. No RFC mentions a command to extract files from an archive. Some servers may implement unzipping via the SITE command or a proprietary command, but in general you either need to extract the files on your local machine and send them uncompressed, or use another protocol such as SSH to run unzip/untar on the remote server.
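For illustration only: if a particular server did happen to expose extraction through SITE (the command name UNZIP here is hypothetical and non-standard; most servers will reject it), and the portable SSH alternative the answer suggests, the invocations would look like:

```shell
# Hypothetical: works only if the server implements a proprietary SITE UNZIP
# command, which is non-standard; expect "500 command not understood" elsewhere.
curl -v -u username:pwd ftp://host/ -Q "SITE UNZIP archive.zip"

# Portable alternative: extract server-side over SSH instead of FTP.
ssh user@host 'cd /path/to/dir && unzip archive.zip'
```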
unzip/untar files using Curl in ftp server
RFC 1123 says: Implementors MUST NOT assume any correspondence between READ boundaries on the control connection and the Telnet EOL sequences (CR LF). DISCUSSION: Thus, a server-FTP (or User-FTP) must continue reading characters from the control connection until a complete Telnet EOL sequence is encountered, before processing the command (or response, respectively). Conversely, a single READ from the control connection may include more than one FTP command. I'm wondering if this requirement is still relevant today. I toyed a little with Linux Netkit's telnet command (the one shipped with Ubuntu). When I open a connection to a telnet server, the client is very eager about flushing data: telnet immediately turns key presses into packets and sends them. Given that FTP stands on top of Telnet, if I telnet to an FTP server, I'd expect the same eager behaviour. Instead, telnet buffers my key presses until the newline and sends only whole FTP command packets, and only when talking to FTP servers. Why does telnet bother to buffer characters for FTP specifically, given the requirement above? It even buffers past the MTU limit (and segments at the TCP layer). man telnet is silent on this topic and I can't find Netkit's source code. (One of the links is broken and the other leads to an empty repository.) In any case, man telnet says "The source code is not comprehensible", so I'm pessimistic about finding a comment somewhere that would explain the rationale. Is there some updated specification or practical constraint that removes an implementation's freedom to send half-finished FTP commands? What am I missing?
Telnet has two basic modes of operation: line mode and character mode. The default mode is always line mode. The telnet application then has to negotiate with the remote server about what it can and cannot support. This is done through a system called TELOPT and the special character 255, known as IAC (Interpret As Command). Only when the two ends have agreed that they can do character-at-a-time operation, and that the remote end will echo back received characters, will the link be in the traditional interactive (non-line-buffered) mode. Since FTP is not meant to be manually interacted with using telnet, it doesn't support any of the advanced Telnet TELOPT features, so character mode can never be negotiated. All this is in RFC 854.
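As a sketch of the negotiation the answer describes, here is a minimal decoder for Telnet IAC option sequences (byte values and option names per RFC 854, RFC 857 and RFC 858; the parser is a simplified illustration and ignores subnegotiation entirely):

```python
# Telnet option negotiation, RFC 854: IAC (255) introduces a command byte
# (WILL/WONT/DO/DONT), which is followed by one option byte such as
# ECHO (1, RFC 857) or SUPPRESS-GO-AHEAD (3, RFC 858).
IAC = 255
CMDS = {251: "WILL", 252: "WONT", 253: "DO", 254: "DONT"}
OPTS = {1: "ECHO", 3: "SUPPRESS-GO-AHEAD"}

def parse_negotiation(data: bytes):
    """Return (command, option) pairs for each 3-byte IAC sequence in data.

    Plain text (e.g. FTP commands, which contain no IAC bytes) yields nothing,
    which is why an FTP control connection never leaves line mode.
    """
    pairs = []
    i = 0
    while i <= len(data) - 3:
        if data[i] == IAC and data[i + 1] in CMDS:
            pairs.append((CMDS[data[i + 1]], OPTS.get(data[i + 2], data[i + 2])))
            i += 3
        else:
            i += 1
    return pairs
```

For example, a server offering character mode sends DO ECHO and WILL SUPPRESS-GO-AHEAD; an FTP server sends neither, so the client stays line-buffered.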
Why does `telnet` buffer FTP control connection lines?
I am setting up my Ubuntu 14.04 server. I want to have FTP access (I use vsftpd), mainly for my apache2 server, so I added a new user to the www-data group. I also set up vsftpd.conf, allowing only local users, with chroot_local_user=YES. I changed the home directory of my new user to /var/www/home. The permissions are 755 (local_umask=022). I can now create files and download them in the /var/www/home/ directory, and cannot change to the root directory /, on purpose for security reasons. But which strategy can I use when I want to change some conf files, download log files and so on, when I only allow users to stay in their home directory? Which other security measures are highly recommended to make my FTP server secure?
This is not how I would do this, but answering your question anyway. From man vsftpd.conf:

chroot_list_enable
If activated, you may provide a list of local users who are placed in a chroot() jail in their home directory upon login. The meaning is slightly different if chroot_local_user is set to YES. In this case, the list becomes a list of users which are NOT to be placed in a chroot() jail. By default, the file containing this list is /etc/vsftpd.chroot_list, but you may override this with the chroot_list_file setting. Default: NO

So:

chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list

and add any user you don't want to be chrooted to that list, so you can "change some conf files, download log files and so on".
ftp user for root directories? [closed]
I want to change my ProFTPD server port from 21 to 1945. But I didn't find any port mention in the proftpd.conf file. When I add port 21 in the conf file and restart the ftp server it shows the following error: Starting proftpd (via systemctl): Job failed. See system logs and 'systemctl status' for details. [FAILED] Please help me with finding out how I change my ftp server port. Below is the proftpd.conf file: # This is the ProFTPD configuration file # # See: http://www.proftpd.org/docs/directives/linked/by-name.html # Security-Enhanced Linux (SELinux) Notes: # # In Fedora and Red Hat Enterprise Linux, ProFTPD runs confined by SELinux # in order to mitigate the effects of an attacker taking advantage of an # unpatched vulnerability and getting control of the ftp server. By default, # ProFTPD cannot read or write most files on a system nor connect to many # external network services, but these restrictions can be relaxed by # setting SELinux booleans as follows: # # setsebool -P allow_ftpd_anon_write=1 # This allows the ftp daemon to write to files and directories labelled # with the public_content_rw_t context type; the daemon would only have # read access to these files normally. Files to be made available by ftp # but not writeable should be labelled public_content_t. # # setsebool -P allow_ftpd_full_access=1 # This allows the ftp daemon to read and write all files on the system. # # setsebool -P allow_ftpd_use_cifs=1 # This allows the ftp daemon to read and write files on CIFS-mounted # filesystems. # # setsebool -P allow_ftpd_use_nfs=1 # This allows the ftp daemon to read and write files on NFS-mounted # filesystems. # # setsebool -P ftp_home_dir=1 # This allows the ftp daemon to read and write files in users' home # directories. 
# # setsebool -P ftpd_connect_all_unreserved=1 # This setting is only available from Fedora 16/RHEL-7 onwards, and is # necessary for active-mode ftp transfers to work reliably with non-Linux # clients (see http://bugzilla.redhat.com/782177), which may choose to # use port numbers outside the "ephemeral port" range of 32768-61000. # # setsebool -P ftpd_connect_db=1 # This setting allows the ftp daemon to connect to commonly-used database # ports over the network, which is necessary if you are using a database # back-end for user authentication, etc. # # setsebool -P ftpd_is_daemon=1 # This setting is available only in Fedora releases 4 to 6 and Red Hat # Enterprise Linux 5. It should be set if ProFTPD is running in standalone # mode, and unset if running in inetd mode. # # setsebool -P ftpd_disable_trans=1 # This setting is available only in Fedora releases 4 to 6 and Red Hat # Enterprise Linux 5, and when set it removes the SELinux confinement of the # ftp daemon. Needless to say, its use is not recommended. # # All of these booleans are unset by default. # # See also the "ftpd_selinux" manpage. # # Note that the "-P" option to setsebool makes the setting permanent, i.e. # it will still be in effect after a reboot; without the "-P" option, the # effect only lasts until the next reboot. # # Restrictions imposed by SELinux are on top of those imposed by ordinary # file ownership and access permissions; in normal operation, the ftp daemon # will not be able to read and/or write a file unless *all* of the ownership, # permission and SELinux restrictions allow it. # Server Config - config used for anything outside a <VirtualHost> or <Global> context # See: http://www.proftpd.org/docs/howto/Vhost.html # Trace logging, disabled by default for performance reasons # (http://www.proftpd.org/docs/howto/Tracing.html) #TraceLog /var/log/proftpd/trace.log #Trace DEFAULT:0 ServerName "ProFTPD server" ServerIdent on "FTP Server ready." 
ServerAdmin root@localhost DefaultServer on # Cause every FTP user except adm to be chrooted into their home directory DefaultRoot ~ !adm # Use pam to authenticate (default) and be authoritative AuthPAMConfig proftpd AuthOrder mod_auth_pam.c* mod_auth_unix.c # If you use NIS/YP/LDAP you may need to disable PersistentPasswd #PersistentPasswd off # Don't do reverse DNS lookups (hangs on DNS problems) UseReverseDNS off # Set the user and group that the server runs as User nobody Group nobody # To prevent DoS attacks, set the maximum number of child processes # to 20. If you need to allow more than 20 concurrent connections # at once, simply increase this value. Note that this ONLY works # in standalone mode; in inetd mode you should use an inetd server # that allows you to limit maximum number of processes per service # (such as xinetd) MaxInstances 20 # Disable sendfile by default since it breaks displaying the download speeds in # ftptop and ftpwho UseSendfile off # Define the log formats LogFormat default "%h %l %u %t \"%r\" %s %b" LogFormat auth "%v [%P] %h %t \"%r\" %s" # Dynamic Shared Object (DSO) loading # See README.DSO and howto/DSO.html for more details # # General database support (http://www.proftpd.org/docs/contrib/mod_sql.html) # LoadModule mod_sql.c # # Support for base-64 or hex encoded MD5 and SHA1 passwords from SQL tables # (contrib/mod_sql_passwd.html) # LoadModule mod_sql_passwd.c # # Mysql support (requires proftpd-mysql package) # (http://www.proftpd.org/docs/contrib/mod_sql.html) # LoadModule mod_sql_mysql.c # # Postgresql support (requires proftpd-postgresql package) # (http://www.proftpd.org/docs/contrib/mod_sql.html) # LoadModule mod_sql_postgres.c # # Quota support (http://www.proftpd.org/docs/contrib/mod_quotatab.html) # LoadModule mod_quotatab.c # # File-specific "driver" for storing quota table information in files # (http://www.proftpd.org/docs/contrib/mod_quotatab_file.html) # LoadModule mod_quotatab_file.c # # SQL database "driver" 
for storing quota table information in SQL tables # (http://www.proftpd.org/docs/contrib/mod_quotatab_sql.html) # LoadModule mod_quotatab_sql.c # # LDAP support (requires proftpd-ldap package) # (http://www.proftpd.org/docs/directives/linked/config_ref_mod_ldap.html) # LoadModule mod_ldap.c # # LDAP quota support (requires proftpd-ldap package) # (http://www.proftpd.org/docs/contrib/mod_quotatab_ldap.html) # LoadModule mod_quotatab_ldap.c # # Support for authenticating users using the RADIUS protocol # (http://www.proftpd.org/docs/contrib/mod_radius.html) # LoadModule mod_radius.c # # Retrieve quota limit table information from a RADIUS server # (http://www.proftpd.org/docs/contrib/mod_quotatab_radius.html) # LoadModule mod_quotatab_radius.c # # SITE CPFR and SITE CPTO commands (analogous to RNFR and RNTO), which can be # used to copy files/directories from one place to another on the server # without having to transfer the data to the client and back # (http://www.castaglia.org/proftpd/modules/mod_copy.html) # LoadModule mod_copy.c # # Administrative control actions for the ftpdctl program # (http://www.proftpd.org/docs/contrib/mod_ctrls_admin.html) LoadModule mod_ctrls_admin.c # # Support for MODE Z commands, which allows FTP clients and servers to # compress data for transfer # (http://www.castaglia.org/proftpd/modules/mod_deflate.html) # LoadModule mod_deflate.c # # Execute external programs or scripts at various points in the process # of handling FTP commands # (http://www.castaglia.org/proftpd/modules/mod_exec.html) # LoadModule mod_exec.c # # Support for POSIX ACLs # (http://www.proftpd.org/docs/modules/mod_facl.html) # LoadModule mod_facl.c # # Support for using the GeoIP library to look up geographical information on # the connecting client and using that to set access controls for the server # (http://www.castaglia.org/proftpd/modules/mod_geoip.html) # LoadModule mod_geoip.c # # Allow for version-specific configuration sections of the proftpd config 
file, # useful for using the same proftpd config across multiple servers where # different proftpd versions may be in use # (http://www.castaglia.org/proftpd/modules/mod_ifversion.html) # LoadModule mod_ifversion.c # # Configure server availability based on system load # (http://www.proftpd.org/docs/contrib/mod_load.html) # LoadModule mod_load.c # # Limit downloads to a multiple of upload volume (see README.ratio) # LoadModule mod_ratio.c # # Rewrite FTP commands sent by clients on-the-fly, # using regular expression matching and substitution # (http://www.proftpd.org/docs/contrib/mod_rewrite.html) # LoadModule mod_rewrite.c # # Support for the SSH2, SFTP, and SCP protocols, for secure file transfer over # an SSH2 connection (http://www.castaglia.org/proftpd/modules/mod_sftp.html) # LoadModule mod_sftp.c # # Use PAM to provide a 'keyboard-interactive' SSH2 authentication method for # mod_sftp (http://www.castaglia.org/proftpd/modules/mod_sftp_pam.html) # LoadModule mod_sftp_pam.c # # Use SQL (via mod_sql) for looking up authorized SSH2 public keys for user # and host based authentication # (http://www.castaglia.org/proftpd/modules/mod_sftp_sql.html) # LoadModule mod_sftp_sql.c # # Provide data transfer rate "shaping" across the entire server # (http://www.castaglia.org/proftpd/modules/mod_shaper.html) # LoadModule mod_shaper.c # # Support for miscellaneous SITE commands such as SITE MKDIR, SITE SYMLINK, # and SITE UTIME (http://www.proftpd.org/docs/contrib/mod_site_misc.html) # LoadModule mod_site_misc.c # # Provide an external SSL session cache using shared memory # (contrib/mod_tls_shmcache.html) # LoadModule mod_tls_shmcache.c # # Provide a memcached-based implementation of an external SSL session cache # (contrib/mod_tls_memcache.html) # LoadModule mod_tls_memcache.c # # Use the /etc/hosts.allow and /etc/hosts.deny files, or other allow/deny # files, for IP-based access control # (http://www.proftpd.org/docs/contrib/mod_wrap.html) # LoadModule mod_wrap.c # # 
Use the /etc/hosts.allow and /etc/hosts.deny files, or other allow/deny # files, as well as SQL-based access rules, for IP-based access control # (http://www.proftpd.org/docs/contrib/mod_wrap2.html) # LoadModule mod_wrap2.c # # Support module for mod_wrap2 that handles access rules stored in specially # formatted files on disk # (http://www.proftpd.org/docs/contrib/mod_wrap2_file.html) # LoadModule mod_wrap2_file.c # # Support module for mod_wrap2 that handles access rules stored in SQL # database tables (http://www.proftpd.org/docs/contrib/mod_wrap2_sql.html) # LoadModule mod_wrap2_sql.c # # Implement a virtual chroot capability that does not require root privileges # (http://www.castaglia.org/proftpd/modules/mod_vroot.html) # Using this module rather than the kernel's chroot() system call works # around issues with PAM and chroot (http://bugzilla.redhat.com/506735) LoadModule mod_vroot.c # # Provide a flexible way of specifying that certain configuration directives # only apply to certain sessions, based on credentials such as connection # class, user, or group membership # (http://www.proftpd.org/docs/contrib/mod_ifsession.html) # LoadModule mod_ifsession.c # Allow only user root to load and unload modules, but allow everyone # to see which modules have been loaded # (http://www.proftpd.org/docs/modules/mod_dso.html#ModuleControlsACLs) ModuleControlsACLs insmod,rmmod allow user root ModuleControlsACLs lsmod allow user * # Enable basic controls via ftpdctl # (http://www.proftpd.org/docs/modules/mod_ctrls.html) ControlsEngine on ControlsACLs all allow user root ControlsSocketACL allow user * ControlsLog /var/log/proftpd/controls.log # Enable admin controls via ftpdctl # (http://www.proftpd.org/docs/contrib/mod_ctrls_admin.html) <IfModule mod_ctrls_admin.c> AdminControlsEngine on AdminControlsACLs all allow user root </IfModule> # Enable mod_vroot by default for better compatibility with PAM # (http://bugzilla.redhat.com/506735) <IfModule mod_vroot.c> VRootEngine 
on </IfModule> # TLS (http://www.castaglia.org/proftpd/modules/mod_tls.html) <IfDefine TLS> TLSEngine on TLSRequired on TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem TLSCipherSuite ALL:!ADH:!DES TLSOptions NoCertRequest TLSVerifyClient off #TLSRenegotiate ctrl 3600 data 512000 required off timeout 300 TLSLog /var/log/proftpd/tls.log <IfModule mod_tls_shmcache.c> TLSSessionCache shm:/file=/var/run/proftpd/sesscache </IfModule> </IfDefine> # Dynamic ban lists (http://www.proftpd.org/docs/contrib/mod_ban.html) # Enable this with PROFTPD_OPTIONS=-DDYNAMIC_BAN_LISTS in /etc/sysconfig/proftpd <IfDefine DYNAMIC_BAN_LISTS> LoadModule mod_ban.c BanEngine on BanLog /var/log/proftpd/ban.log BanTable /var/run/proftpd/ban.tab # If the same client reaches the MaxLoginAttempts limit 2 times # within 10 minutes, automatically add a ban for that client that # will expire after one hour. BanOnEvent MaxLoginAttempts 2/00:10:00 01:00:00 # Inform the user that it's not worth persisting BanMessage "Host %a has been banned" # Allow the FTP admin to manually add/remove bans BanControlsACLs all allow user ftpadm </IfDefine> # Set networking-specific "Quality of Service" (QoS) bits on the packets used # by the server (contrib/mod_qos.html) <IfDefine QOS> LoadModule mod_qos.c # RFC791 TOS parameter compatibility QoSOptions dataqos throughput ctrlqos lowdelay # For a DSCP environment (may require tweaking) #QoSOptions dataqos CS2 ctrlqos AF41 </IfDefine> # Global Config - config common to Server Config and all virtual hosts # See: http://www.proftpd.org/docs/howto/Vhost.html <Global> # Umask 022 is a good standard umask to prevent new dirs and files # from being group and world writable Umask 022 # Allow users to overwrite files and change permissions AllowOverwrite yes <Limit ALL SITE_CHMOD> AllowAll </Limit> DefaultRoot ~ </Global> # A basic anonymous configuration, with an upload directory # Enable this with 
PROFTPD_OPTIONS=-DANONYMOUS_FTP in /etc/sysconfig/proftpd <IfDefine ANONYMOUS_FTP> <Anonymous ~ftp> User ftp Group ftp AccessGrantMsg "Anonymous login ok, restrictions apply." # We want clients to be able to login with "anonymous" as well as "ftp" UserAlias anonymous ftp # Limit the maximum number of anonymous logins MaxClients 10 "Sorry, max %m users -- try again later" # Put the user into /pub right after login #DefaultChdir /pub # We want 'welcome.msg' displayed at login, '.message' displayed in # each newly chdired directory and tell users to read README* files. DisplayLogin /welcome.msg DisplayChdir .message DisplayReadme README* # Cosmetic option to make all files appear to be owned by user "ftp" DirFakeUser on ftp DirFakeGroup on ftp # Limit WRITE everywhere in the anonymous chroot <Limit WRITE SITE_CHMOD> DenyAll </Limit> # An upload directory that allows storing files but not retrieving # or creating directories. <Directory uploads/*> AllowOverwrite no <Limit READ> DenyAll </Limit> <Limit STOR> AllowAll </Limit> </Directory> # Don't write anonymous accesses to the system wtmp file (good idea!) WtmpLog off # Logging for the anonymous transfers ExtendedLog /var/log/proftpd/access.log WRITE,READ default ExtendedLog /var/log/proftpd/auth.log AUTH auth </Anonymous> </IfDefine> <VirtualHost 192.168.0.4> ServerName esdev.wiantech.net <Anonymous /home/esdev/ftp> User ftp Group ftp UserAlias anonymous ftp <Limit WRITE> DenyAll </Limit> RequireValidShell off ExtendedLog /home/esdev/logs/ftp.log </Anonymous> </VirtualHost> <VirtualHost 122.176.97.151> </VirtualHost> <VirtualHost 122.176.97.151> ServerName "klarify15.wiantech.net" </VirtualHost> <VirtualHost 122.176.97.151> ServerName "klarify15.wiantech.net" </VirtualHost>
You can use the Port directive:

Port <port number>

Note that if you set the port to N, you must leave port N-1 available too: RFC 959 defines that the source port for active-mode data transfers must be N-1. You can use the Port directive in the server context or in a virtual-server context. Setting Port 0 disables the server.
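For example, with the port number from the question (placement near the top of /etc/proftpd.conf is an assumption; the directive just needs to be in the server or virtual-host context), you would add:

```
# Listen on a non-standard control port instead of 21
Port 1945
```

then restart the service. If the restart still fails, check the output of systemctl status proftpd and the journal; on SELinux-enforcing systems a non-standard port may additionally need to be allowed, e.g. with semanage port -a -t ftp_port_t -p tcp 1945 (assuming the SELinux policy tools are installed).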
How to change the ftp server port in ProFTPD
So, using vsftpd, I want to lock a user to /home/theirname/Minecraft. I can't change their home directory, because a program I'm using (McMyAdmin) would try to reinstall itself, but I don't want them having access to the program's configs.
This tutorial would seem to be what you're looking for, titled: Setup Virtual Users and Directories in VSFTPD.

Excerpt, in /etc/vsftpd.conf:

listen=YES
anonymous_enable=NO
local_enable=YES
virtual_use_local_privs=YES
write_enable=YES
connect_from_port_20=YES
secure_chroot_dir=/var/run/vsftpd
pam_service_name=vsftpd
guest_enable=YES
user_sub_token=$USER
local_root=/var/www/sites/$USER
chroot_local_user=YES
hide_ids=YES

You'll likely need to customize this slightly based on what your needs are. You'll want to change the local_root line for starters:

local_root=/home/$USER/Minecraft
vsftpd limit users to /home/user/minecraft
I was not able to ftp to my remote server. I read Ubuntu doc to see how to resolve: https://help.ubuntu.com/10.04/serverguide/ftp-server.html Just now I installed vsftpd on my remote machine. Now when I run ftp rs it's saying : ravbholua@ravbholua-Aspire-5315:~$ ftp rs Connected to ravi.com. 220 (vsFTPd 3.0.2) 331 Please specify the password. 530 Login incorrect. Login failed. ftp> You see here it says to specify a password but it is not waiting for me to input password. Automatically it proceeds ahead and messages login failed. How to resolve? (Also I would appreciate if any material/link can be provided to me) EDIT @dru please see the content of the configuration file $ cat /etc/vsftpd.conf # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. #write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. 
Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. #xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. 
Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. #chroot_local_user=YES # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). # (Warning! chroot'ing can be very dangerous. If using chroot, make sure that # the user does not have write access to the top level directory within the # chroot) #chroot_local_user=YES #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Customization # # Some of vsftpd's settings don't fit the filesystem layout by # default. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. 
This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. pam_service_name=vsftpd # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem # This option specifies the location of the RSA key to use for SSL # encrypted connections. rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
Not quite sure why vsftpd.conf is described as looking okay above. The question asks why anonymous FTP doesn't work, and the line anonymous_enable=NO is in the config file, so that's one point of note. Since the FTP client didn't bother to prompt you for a password, I'd assume it's "auto" logging in as anonymous: for decades the convention has been the username "anonymous" or "ftp" with an email address as the password (which everyone always fakes). I don't know which FTP client your "ftp" command runs, but if it's something like ncftp it will just do an anonymous login unless you put "-u username" in the parameters. That's my two cents on both parts anyway; we'd need more to go on to really understand your FTP client's behaviour.
Not able to ftp remote server anonymously
I need to back up ~0.5 TB of files (about 600,000 of them) from a Windows Server 2003 machine to a CentOS 6.4 machine across a 100MB network, and update them once a night. The backup script would be hosted on CentOS. Will SlimFTPd server+standard CentOS ftp client, or Windows AD fileshare+Samba 4 client be faster for synchronizing the backup via rsync? I.e. once it's going, say there are about 15 GB of changes, which one will be faster at comparing the directory structure to mirror them?
I would use rsync, because it only copies what has changed. I would install Cygwin with openssh and rsync on the Windows server, and use rsync over ssh to make the backup, with a command such as this:

rsync -e ssh -var --progress --partial server:/cygdrive/c/myfiles $HOME/mybackup

The advantage over either of your FTP or Samba options is that rsync runs locally on both the CentOS box and the Windows server, collecting file names, sizes, timestamps and (if necessary) checksums, and only (those parts of) files and folders that have changed (or are new) will be transferred. Cygwin can be downloaded from http://www.cygwin.com/. Make sure you install openssh (the server) and rsync. And this probably works to enable the ssh server after installing: http://www.noah.org/ssh/cygwin-sshd.html
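To run it nightly as the question asks, a crontab entry on the CentOS box could look like the following (the 02:00 schedule, paths and log file are assumptions; install it with crontab -e):

```
# Nightly pull at 02:00; --delete keeps the backup an exact mirror of the source
0 2 * * * rsync -a --delete -e ssh server:/cygdrive/c/myfiles $HOME/mybackup >> $HOME/backup.log 2>&1
```

Passwordless SSH key authentication from the CentOS box to the Windows sshd is needed for this to run unattended.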
rsync: samba vs ftp for backup
I am running a CentOS VPS. I have installed an FTP server on it like this:

yum install vsftpd

and I have restarted the service as well. However, when I point my browser to ftp://myStaticIP it does not return anything. Also, how do I add an FTP user to my VPS so that I can log in via a client?
Normally on StackExchange sites, answers are supposed to be comprehensive and not merely link to other resources. However, you're asking two pretty basic, but very broad, questions here. How do I configure the CentOS firewall? How do I configure vsftpd on CentOS? I can't write an answer which gives you everything you need to know about those, but I strongly suggest you find some good documentation and have a thorough read. I don't use CentOS, so I have no direct experience of how it does things by default. You may find these sources useful, CentOS Firewall HowTo RedHat Guide to configuring FTP Services (on CentOS Site) CentOS HowTo for chrooted vsftpd I strongly encourage you to read the CentOS site, and read about the vsftpd service, and understand how to configure it, because even if someone on this site fixes your current issue, you'll run into another one and be back to ask about that. You're not asking about a fault or an issue, you're asking for basic information on how to complete the setup, and the best way to achieve that is to read the documentation. By all means, if you follow the documentation and something doesn't work, come back and formulate a question about that specific issue, but I urge you to not simply install services and hope they work without understanding how to configure them.
How to add FTP users to my CentOS VPS [closed]
1,501,174,611,000
I know the ftp username, password, and directory of the source. I just want to download the whole thing to a new server. Simple download. The username is not root. So I wonder how I would specify the directory name? Should I give the /home/username/public_html or should I give it as ~username/public_html
wget

Use wget as follows:

wget --mirror --no-parent --user=<ftpuser> --password=<ftppassword> ftp://server/<directory path>

It will download the whole directory recursively.

Option --no-parent: do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded. So the following

wget --mirror --no-parent --user=<ftpuser> --password=<ftppassword> ftp://server/home/username/public_html

will only download the directory structure starting with public_html.

Directory path

You should log in to the ftp server once to confirm the path. Depending on how the ftp server is configured, the path may actually start within the home directory. In that case, the directory path will be /public_html only.

Changing Directory Ownership

Change the user and group of the downloaded directory with the following command:

chown -R <user>:<group> public_html

If you want to change to user www-data and group www-data:

chown -R www-data:www-data public_html

You may also want to remove write permission for others/anybody:

chmod -R o-w public_html

-R = recursive
Category (several can be combined, without spaces): u = user, g = group, o = others (anybody)
Sign: "-" removes the permission(s) following the sign from the category before it; "+" adds the permission(s) following the sign to the category before it
Permission (several can be combined, without spaces): r = read, w = write, x = execute

Example:
ug+w = add write permission for user and group
ugo-wx = remove write and execute permission from user, group and others
In FTP, can I specify a remote directory using `~username` syntax?
1,501,174,611,000
I am connecting to my server via FTP (that is the only way I have access to this particular server). I need to run utime via FTP on the server. Other commands work, but when I try something like

site for file in *;do utime 20120101084400 ${file}; done;

to change all dates in a given directory, I receive an error that says usage: form format or invalid command, depending on the variation of that command I try. I tried this command (without the site part) on my local unix box and it works, but not remotely via FTP. How do I do that? If there's a way to do it recursively, that would be a bonus... :) Thanks.
Allowed SITE commands vary with the FTP daemon in use (like SITE CHMOD in vsftpd; some more in Apache's FtpServer). In general, it's not the remote shell facility you seem to expect. However, ProFTPd apparently supports SITE UTIME. So if your FTPd has SITE UTIME, you could try issuing it in a client-side loop, like this (pseudo-code, no server specified; keep in mind that file names might contain spaces and need quotation, and that you'd need a versatile FTP client, lftp being a good candidate):

for file in $(./ftp-get-directory-list)
do
    ftp-client -c "SITE UTIME 20120101084400 ${file}"
done

EDIT: also see Gilles' nice answer here on the possible interaction between Bash scripts and lftp.

Addendum: If you're into some scripting language (e.g. Python), you could also do yourself a favor by using an FTP library.
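If you end up scripting this client-side, generating the raw commands first keeps the loop simple. A sketch (the lftp invocation and "myserver" are placeholders, and the server must actually support SITE UTIME):

```shell
# Build one SITE UTIME command per file; timestamp format is
# YYYYMMDDhhmmss, as in the question.
site_utime_cmd() {
    printf 'SITE UTIME %s %s\n' "$1" "$2"
}

# Feed the generated commands to your client, e.g. (placeholder):
#   { site_utime_cmd ...; } | lftp -u user,pass myserver
site_utime_cmd 20120101084400 file1.txt
site_utime_cmd 20120101084400 file2.txt
```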
Trying to Utime via FTP
1,501,174,611,000
There is a script (bsh) running on an AIX 7.1 machine. The script downloads some files via FTP from another machine. I need to stop it later because somebody needs to do maintenance on the machine, even if the script has not finished. So I would like to know: if I re-run the script later, can I configure the ftp client to skip a file if a local copy already exists and is identical to the remote one?
Short answer: no. FTP is a blunt tool. Use wget, which does provide this functionality (e.g. via its -N timestamping option) and can pull files from an FTP server. An even better option is to use rsync (over ssh).
ftp option to overwrite file if different size
1,501,174,611,000
I tried to rsh a shell script from OpenVMS to a Red Hat Linux machine. It seems that it is not executed. I created the shell script on OpenVMS and FTPed it to the Linux machine. I then ran ls -la on the folder in Linux:

-rw-r--r-- 1 buedev buedev 382 Jul 20 11:03 files.sh

It seems that even the owner doesn't have the right to execute it. And if we need to chmod it, how can we do that remotely from OpenVMS?
In order to execute, the execute bit must be set. Even the owner of a file cannot ask the system to execute it if it's not marked as executable. The one caveat here is that in the case of most shell scripts, you can execute them by calling the shell yourself and feeding it the script as an argument:

/bin/sh /path/to/files.sh

This executes the sh shell and sends it the text of your script to be executed. It only requires read permission on the file, because the shell is what is being executed and it only needs to read the script, not execute it. You can change the permissions of files that will be written by setting the umask in your ftp preferences, or use a shell later to chmod them. Some ftp daemons also support changing the permissions on existing files.
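A quick local illustration of the point above (the temporary path is just an example; 644 is the mode the FTPed file arrived with):

```shell
dir=$(mktemp -d)
printf '%s\n' '#!/bin/sh' 'echo hello' > "$dir/files.sh"
chmod 644 "$dir/files.sh"        # -rw-r--r-- : no execute bit set

"$dir/files.sh" 2>/dev/null || echo "direct execution refused"
/bin/sh "$dir/files.sh"          # works: only read permission is needed
```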
Do we need to chmod a shell script before it can be rsh
1,501,174,611,000
I'm working on a program that involves uploading some files via FTP. However, I'd like to abort the process if the username and password given by the user are incorrect. Is there any way to "test" whether the pair is correct? Is a utility like ftp or curl able to do that, or should I use some feature (apparently hidden from my eyes) of libcurl? Any ideas are really appreciated. Maybe this sounds like a question coming from a dummy programmer, but indeed I've never dealt with FTP (or any other protocol) directly. Please consider writing "you didn't get the point" or "you're crazy" if needed.
The short answer is no: you will need to try the FTP connection via libcurl and see if the authentication succeeds. The username/password only exist on the remote server, and you don't know if they are being changed or altered at any stage (for legitimate reasons). Hence, your code will have to take credentials from the user and simply try an FTP connection. You could try an FTP operation which doesn't transfer data (i.e. just connect and then disconnect, or connect, do an ls, then disconnect), which will allow libcurl to report an issue if authentication fails. Outside of that, no, you can't realistically pre-authorise the credentials.
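One way to script such a probe with the curl command-line tool (a sketch; host and credentials are placeholders, and the interpretation relies on curl's documented exit status 67, CURLE_LOGIN_DENIED):

```shell
interpret_curl_status() {
    case "$1" in
        0)  echo "credentials ok" ;;
        67) echo "login denied" ;;         # CURLE_LOGIN_DENIED
        *)  echo "other failure ($1)" ;;
    esac
}

check_ftp_login() {
    # Attempt a bare directory listing; nothing is written to disk.
    curl -sS --max-time 15 --user "$2:$3" "ftp://$1/" -o /dev/null
    interpret_curl_status $?
}

# check_ftp_login ftp.example.com myuser mypass    # placeholder usage
interpret_curl_status 67
```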
Test FTP Username and Password
1,501,174,611,000
I'm writing a perl program where I want to accept all kinds of "file names" from the user, including /home/foo/bar.txt and scp://server/some/file.txt or whatever. And then I thought that if I can find a cmd line program that does it, I'll find a perl module that does it too, or I'll just run it through perl's system() call. I thought of cURL and/or lftp, but on ubuntu, I get: > curl scp://server/some/file curl: (1) Protocol scp not supported or disabled in libcurl > lftp -c 'get scp://server/some/file' get: scp - not supported protocol For curl I investigated why that was on Ubuntu and found curl and pycurl is not compiled with sftp support - basically it ain't gonna happen unless I recompile libcurl*, but I don't want to require my users to apply a patch to libcurl*. Sure, I can write it myself as in: if ($proto eq 'scp') { # handle scp open I, 'scp ...' or die; } elsif ($proto eq 'http') { open I, 'wget ...' or die; } else { # whatever } But I'd much rather somebody else write/test/debug that with a huge list of protocols and I'm surprised I couldn't find anything that does this out-of-the-box! Do you know of any that I missed?
I believe the protocol name is sftp, not scp. On my system, the following works: lftp -c 'get sftp://someserver/file', as do ftp and http.
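For the dispatch-by-protocol idea in the question, extracting the scheme is a one-liner, which keeps the switch small. A sketch (the function name and per-scheme commands are illustrative, not tested against every server):

```shell
url_scheme() {
    case "$1" in
        *://*) printf '%s\n' "${1%%://*}" ;;
        *)     printf 'file\n' ;;            # bare paths count as local files
    esac
}

# A fetch() wrapper would branch on the scheme, e.g.:
#   sftp -> lftp -c "get $url"      http/https/ftp -> wget "$url"
url_scheme 'sftp://someserver/file'
url_scheme '/home/foo/bar.txt'
```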
Need command-line program to download all of http:// https:// scp:// ftp:// style links (on e.g. ubuntu)
1,603,959,433,000
I need to connect to a remote Linux server over SFTP and read the entire folder structure of the server. I would like to list all directories, even directories with no read permissions, and preferably without changing the permissions of the directories. I would like to avoid using root, and I think that a good solution could be to create a "read only" root. When searching for a solution online, I found the CAP_DAC_READ_SEARCH capability. However, capabilities require me to specify a filename, and I don't think I would like the capability to apply to the SSHD process. How can I create a user with the needed permissions?
Make a copy of the sftp-server binary, restrict it to your user, and grant it the CAP_DAC_READ_SEARCH capability:

sftpsrv=/usr/libexec/openssh/sftp-server
cp -a ${sftpsrv} ${sftpsrv}.super
chmod 500 ${sftpsrv}.super
chown someuser ${sftpsrv}.super
/sbin/setcap cap_dac_read_search+ep ${sftpsrv}.super

You'll need to connect to your server this way:

sftp -s /usr/libexec/openssh/sftp-server.super address

And it works:

ls -la /tmp | grep TEST
drwx------. 2 root root 60 Oct 29 13:08 TEST

sftp -s /usr/libexec/openssh/sftp-server.super localhost
user@localhost's password:
Connected to localhost.
sftp> cd /tmp/TEST
sftp> ls
123

/tmp/TEST is owned by root and has 700 permissions. Here's a possible untested solution in case your client is unable to request a custom sftp-server binary:

Match User someuser
    Subsystem sftp /usr/libexec/openssh/sftp-server.super
Creating a user with read permissions to all directories in a Linux server
1,603,959,433,000
I was testing bcache on a Raspberry Pi 4 with Ubuntu. The reason I chose Ubuntu is that I found standard Raspbian has some issues with bcache: the kernel module is not properly loaded. I tried to troubleshoot it a bit, but then I moved to Ubuntu and it worked straight away.

My setup is like this:

1 x 1TB HGST 5400RPM 2.5" laptop hard disk
1 x 256GB WD Green 2.5" SSD
Raspberry Pi 4 4GB model with a large heat-sink for cooling and a 4A power supply

I hooked up both the HDD and SSD to the Raspberry Pi (both externally powered) using the USB 3.0 ports and booted Ubuntu. First I checked for under-voltage errors and found everything normal.

SSD -> /dev/sda
HDD -> /dev/sdb

Then I created one partition on each drive and created the bcache as follows:

make-bcache -B /dev/sdb1
make-bcache -C /dev/sda1

Then I mounted /dev/bcache0 on /datastore, and attached the cache device as follows:

echo MYUUID > /sys/block/bcache0/bcache/attach

Then I enabled the write-back cache:

echo writeback > /sys/block/bcache0/bcache/cache_mode

Then I installed the vsftpd server, made the root ftp dir my bcache0 mount point, and started testing. In the first few tests I could upload files at 113MBps, but I noticed most of the files were written directly to the backing device even with the cache attached. When I checked the status using the bcache-status script (https://gist.github.com/damoxc/6267899), I saw most of the writes miss the cache and go directly to the backing device, so the 113MBps was coming straight from the mechanical hard drive :-O

Then I started to fine-tune. As suggested in the "Troubleshooting performance" part of the https://www.kernel.org/doc/Documentation/bcache.txt document, I first set sequential_cutoff to zero by executing this command:

echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

After this I could instantly see the SSD cache hits increase. At the same time I was running iostat continuously, and I was able to see from iostat that the SSD was being accessed directly.
But after a few minutes my FileZilla client hangs and I cannot restart the FTP upload stream. And when I try to access the bcache0 mount, it's really slow. The cache status was showing as "dirty". Then I restarted the Pi, attached the device again, and set the following:

echo 0 > /sys/fs/bcache/MYUUID/congested_read_threshold_us
echo 0 > /sys/fs/bcache/MYUUID/congested_write_threshold_us

According to the https://www.kernel.org/doc/Documentation/bcache.txt article, this stops bcache from tracking backing-device latency. But even with this option my FTP upload stream kept crashing. Then I set everything back to defaults; still, with a large number of file uploads it crashes. I also noticed that during the test the Pi's CPU was not fully utilized.

The maximum throughput I can get using the Pi 4's 1Gbps Ethernet is 930Mbps, which is extremely good. The HGST drive, when tested with CrystalDiskMark on NTFS, was able to write up to 90MBps. It seems I can get 113MBps on the Pi since the file system is ext4. If I can get more than 80MBps FTP upload speed, I'm OK with that.

My questions are:

Why does the FTP stream keep crashing when used with bcache, and why does the bcache mount get slow over time?
Why is there very low cache usage even with sequential_cutoff set to 0?
Has anyone tested bcache before with a Raspberry Pi 4? If yes, how can I use the SSD for caching properly?
And finally, can someone explain more about how bcache actually works when it is in writeback mode? I only use this for archival data and I don't need a hot-data-on-SSD kind of setup.
I managed to solve this issue with the instructions in this topic: https://www.raspberrypi.org/forums/viewtopic.php?t=245931. It is due to a Raspberry Pi 4 USB 3.0 UASP driver issue, which made my external SSD connection intermittent. After adding a line to cmdline.txt to ignore the UAS interface, my SSD works flawlessly, and so does bcache.

Basically you need to find the VID and PID of your external USB 3.0 SSD/enclosure:

lsusb

Then I had to edit cmdline.txt and add the following at the end, where aaaa is the VID and bbbb is the PID:

usb-storage.quirks=aaaa:bbbb:u

Then reboot the Pi. After the reboot my SSD is stable and I no longer see any errors regarding the UAS interface in my kern.log. Other than this, the described bcache setup works flawlessly with the Raspberry Pi 4. I used Ubuntu for testing.
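Pulling the VID:PID pair out of the lsusb output can be scripted; here is a sketch (the sample line is illustrative, an ASMedia enclosure being a common UAS-quirk case):

```shell
vidpid() {
    # reads lsusb lines on stdin, prints the vid:pid of each device
    sed -n 's/.* ID \([0-9a-fA-F]\{4\}\):\([0-9a-fA-F]\{4\}\).*/\1:\2/p'
}

echo 'Bus 002 Device 003: ID 174c:55aa ASMedia Technology Inc.' | vidpid
# then append to cmdline.txt:  usb-storage.quirks=174c:55aa:u
```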
Testing bcache with raspberry pi 4 on ubuntu
1,603,959,433,000
I have an Apache web server running on this machine with Debian stable. A PHP program there has to connect via FTP to a remote online server, among other things. Everything worked fine until I installed an FTP server (vsftpd version 3.0.3-8+b1), but I don't think this is the culprit. Now I get this error:

I won't open a connection to x.x.x.x --->*remote server ip* (only to x.x.x.x ---->*my public ip*)

And I can't even ls remote directories, getting this error instead:

Unable to list directory

I suspect this is because FTP now runs in active mode "by default", but I am really not sure. Those errors seem related to passive mode. What do you think?

Debian version: Debian GNU/Linux 9
PHP version: 7.0.30-0+deb9u1

The PHP routines we are talking about are ftp_login() and ftp_fput(). I think it's automatically in active mode because every time I use ftp_pasv(, true) it works fine. I didn't have to specify this before.
Issue at Hand

You are reporting that you cannot connect to your remote ftp server. You receive the following error message:

I won't open a connection to x.x.x.x remote server ip (only to x.x.x.x my public ip)

Depending on whether you are using a PHP ftp or vsftpd solution as your ftp server, I may have found 2 possible solutions.

1. Possible VSFTP Configuration Fix

Potentially you only need to correct your vsftpd configuration. I have found this forum post by user nhtrader that could provide a possible solution. Please read it in its entirety to make sure it applies to you.

First off: is this an ftp server located remotely or within your home/LAN? Verify that you can authenticate, or at least communicate with the server, without using your ftp service.

Second, verify that your router supports port forwarding and add the relevant ports you need to the router's port-forwarding list. If you are using a VPS for the ftp server, please reference their documentation regarding port forwarding and supported methods for ftp, authentication, remote management, etc. GRC provides a decent tool to check whether your home network is blocking any ports or services you will require. This can be a good check to verify that your port forwarding works and is not blocked by your ISP. You will also need to create firewall rules on both your host and server to allow connections via the ports you wish to use.

Third, edit your vsftpd configuration file located at /etc/vsftpd.conf.
Add the following entries to the file using sudo nano /etc/vsftpd.conf:

listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
idle_session_timeout=600
data_connection_timeout=120
ftpd_banner=[Whatever message of your choice]
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/private/vsftpd.pem

The following entries are up to you as to what ports you want to use; these are just the ones user nhtrader used:

listen_port=26
pasv_max_port=7004
pasv_min_port=7000

If you are using a dynamic IP address and do not have a static one, use this entry:

pasv_addr_resolve=YES

If you are using a static IP address, use this instead:

pasv_address=x.x.x.x

Save and close the configuration file and restart the vsftpd service:

sudo systemctl restart vsftpd.service

Now you should be able to connect to your ftp server properly. Again, I suggest looking over the entirety of the forum post as it covers a lot of information and possible issues.

2. Possible PHP Fix

According to user Martin Prikryl in this stack overflow post, you can solve that exact error message by moving ftp_pasv after ftp_login in the php module you are using on your server:

$conn_id = ftp_connect('x.x.x.x');
ftp_login($conn_id, 'user', 'pass');
ftp_pasv($conn_id, true);

He found this answer by consulting the php documentation. Please read over all the links to make sure this is an applicable fix for you.

Conclusion

First, verify that your vsftpd or other ftp server configuration is correct, that your firewall allows ftp connections, and that you can even connect to your host. Consult your VPS or router documentation for best practices in doing this. I will be including a link to the vsftpd manpage for reference as well. It has a link to a page explaining what each entry of the configuration file does. Please comment if you have any questions or issues with this answer.
I highly suggest you read through each link I have provided thoroughly before attempting the commands. I appreciate feedback to correct any misconceptions and to improve my posts. I can update my answer as needed. Best of Luck!
ftp passive mode and php
1,603,959,433,000
I have vsftpd running on Ubuntu 16.04 LTS. During installation an ftp user is created with a home directory of /srv/ftp, and hence this is the default FTP directory. Here are the vsftpd.conf settings I've applied:

listen_ipv6=YES
anonymous_enable=YES
local_enable=YES
write_enable=YES
local_umask=022
anon_umask=011
anon_upload_enable=YES
anon_mkdir_write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
allow_writeable_chroot=YES
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd

What I'm trying to do is upload files as an anonymous user to the ftp server. I am able to log in as an anonymous user, but when I try to upload, I get:

200 PORT command successful. Consider using PASV.
553 Could not create file.

Now there are numerous sources on the internet reporting the same error, but none of their solutions solve mine. I know there is something about the permissions that I'm missing. The /srv/ftp permissions are set to 755.
I installed vsftpd and FileZilla, went through your .conf, and added options accordingly:

$ sudo cat /etc/vsftpd/vsftpd.conf | grep -v "#"
anonymous_enable=YES
local_enable=YES
write_enable=YES
local_umask=022
anon_upload_enable=YES
anon_mkdir_write_enable=YES
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
chown_uploads=YES
chown_username=abdullah
xferlog_std_format=YES
chroot_local_user=YES
listen=NO
listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

FileZilla gave some feedback, and I had to change the option chown_username=abdullah to my existing user name. Then I ran into a permission problem, which I solved by changing the ownership of the ftp folder /var/ftp/pub from root to ftp. After that, I was able to upload & bind the files but not modify them, since we have a umask option.
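The umask mentioned at the end is the same mechanism that later blocks modification of uploads; a quick local demonstration of how a umask shapes the mode of new files (GNU stat assumed):

```shell
dir=$(mktemp -d)
( umask 077; touch "$dir/strict" )     # like local_umask=077
( umask 022; touch "$dir/relaxed" )    # like local_umask=022 in the config above
stat -c '%a %n' "$dir/strict" "$dir/relaxed"   # 600 vs 644
```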
Not able to upload as anonymous user in vsftpd
1,603,959,433,000
So I'm trying to set up an FTP service and send some files to it through my C# application. I can upload files to the main folder /www/ just fine, but I can't figure out how to give the user "kmsuser" (group "ftpaccess") permission to upload files into the subdirectories of /www/. I tried the chgrp command and it looked like it worked, but I still can't write anything in these folders.

So this is what my ftp settings look like. I uploaded those .txt files through my C# application with no problems in /www/, but even though the folders 'Comercial', 'Financeiro' and 'RecursosHumanos' seem to be in the ftpaccess group, it still doesn't allow me to upload anything into them, even through the FileZilla software on Linux. I'm also leaving below the commands I used to put these folders into the group:

root -i
chgrp ftpaccess /home/kmsuser/www/Comercial/
chgrp ftpaccess /home/kmsuser/www/Financeiro/
chgrp ftpaccess /home/kmsuser/www/RecursosHumanos/
chgrp -R ftpaccess /home/kmsuser/www/

Pls help!
The subfolders don't have group write permission (note the dash in the group triplet of drwxr-xr-x). Use chmod g+w to add it to each of them (append the folder name to the command, just like with chgrp).
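A quick local reproduction of the fix (the folder name is borrowed from the question, created under a temp dir):

```shell
base=$(mktemp -d)
mkdir "$base/Comercial"
chmod 755 "$base/Comercial"     # drwxr-xr-x : group cannot write
chmod g+w "$base/Comercial"     # drwxrwxr-x : group members can now create files
stat -c '%A' "$base/Comercial"
```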
FTP Permissions
1,603,959,433,000
I have an issue with an API from an external provider. Their API should do an FTP push to an external server; however, it is failing. They say that the "Append" function needs to be active/enabled because, according to them, the push is failing since the file can't be created on the server (I realize that append would just add to an existing file). I am running CentOS 6 Linux and am able to create new files via PHP. I can't find that function anywhere online. Does it even exist on Linux, and if yes, how can I confirm it's enabled?
Append works by default for a stock vsftpd install on CentOS with an authenticated login.

$ sudo yum -y install vsftpd ftp
...
$ mkdir ~/tmp; cd ~/tmp
$ echo hi > foo
$ ftp localhost
...
ftp> put foo
...
ftp> ^Z
$ cat ~/foo
hi
$ fg
append foo foo
...
ftp> ^Z
$ cat ~/foo
hi
hi
$

You'll need to debug the FTP connection (e.g. with wireshark) and review the server logs (under /var/log) to see what is going awry.
How to check if Append Function is activated/enabled on Linux server
1,603,959,433,000
I must occasionally connect via ftp to get some files, and made a script to do this. The problem is that the user is generic: everyone at my work uses this user on both the local host and the remote machine. I must connect with my personal user, so they can see my password in the script. Is there some way to avoid this?
If the script is run only when you are logged in, you could set the password in an environment variable that is read when the script is run: set it once per session instead of hard-coding it or prompting for it. For example:

# log in to your session
[user@host]  export pass=1234abc
[user@host] my-ftp.sh

Note the extra space in front of the export command: this is an option in most shells (such as Bash or Zsh) that will prevent the command being recorded in the shell's history. This would allow you to read the password from the environment variable ${pass} within the script, but not have the password recorded in a file on the shared host.

Otherwise, short of prompting for the password every time the script is run, there's no real way to keep it secured for a shared user: everything that you have access to non-interactively as the shared user, so too will your colleagues. You could try saving the password in a file that is read in at runtime, but the shared user would still need access to the file, and at most it would be security through obscurity (which isn't security at all).
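On the script side, a guard like this makes the script fail fast when the session variable was never exported (a sketch; the variable name pass matches the session example above, and the lftp line is a placeholder):

```shell
require_pass() {
    # abort with a hint unless the session supplied the secret
    : "${pass:?not set - run '  export pass=...' in your session first}"
}

pass='example-secret'   # in real use this comes from the session, not the script
require_pass
echo "password available (length ${#pass})"
# lftp -u "myuser,$pass" ftp.example.com -e 'get somefile; bye'
```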
Hide password ftp with the same user
1,603,959,433,000
I've installed OpenPanel, which seems to ship with the Pure-FTPd server. I added a Linux user ftpuser, and now I can log in with it. I'd like to specify the directory this user starts in when it logs in. How can I achieve this?
You can use usermod with its -d option, if you have it installed:

usermod -d /new/ftpuserhome ftpuser

If you don't have that, you can also edit the /etc/passwd file as root and change the 6th field, the one before the last (: is the field separator).
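That 6th field can also be inspected directly, which is handy for checking that the change took effect (a sketch, shown for root, whose entry exists everywhere):

```shell
# Home directory = field 6 of the ':'-separated passwd entry
getent passwd root | cut -d: -f6
# after 'usermod -d /new/ftpuserhome ftpuser', the same check on ftpuser
# would print /new/ftpuserhome
```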
(Pure FTP) FTP User login directory
1,603,959,433,000
I cannot seem to retrieve a non-zero return code when calling the ftp macro. It doesn't matter what errors are encountered during the macro's execution, e.g. the directory doesn't exist, the file doesn't exist, etc. I'd love to know why. I'm using bash on Solaris. My .netrc file looks like this:

machine myftp1
login xxxxxxxx
password xxxxxxxxx
macdef getASCIIfiles
cd $1
hash
prompt off
get $2

Executing the following commands

echo "\$ getASCIIfiles Scratch/mydir NON_EXISTANT_FILE.TXT" | ftp -i myftp1
echo $?

produces the following output:

Hash mark printing on (8192 bytes/hash mark).
Interactive mode on.
NON_EXISTANT_FILE.TXT: The system cannot find the file specified.
0

Why is zero being returned?
The 'ftp' command does not seem to return error codes other than 0. An alternative solution would be to check the FTP reply codes in the output. There are some examples of how to do this here: https://stackoverflow.com/a/4442763
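Following that approach, you can capture the session output and look for error replies yourself. A sketch (the macro and host names are the ones from the question; the grep only covers standard numeric 4xx/5xx replies, not free-form server messages):

```shell
ftp_output_failed() {
    # succeeds (exit 0) when the captured session text contains an
    # FTP error reply, i.e. a line starting with a 4xx or 5xx code
    printf '%s\n' "$1" | grep -Eq '^[45][0-9]{2}[ -]'
}

# out=$(echo "\$ getASCIIfiles Scratch/mydir FILE.TXT" | ftp -i myftp1 2>&1)
# ftp_output_failed "$out" && exit 1
ftp_output_failed '550 NON_EXISTANT_FILE.TXT: not found' && echo "would exit 1"
```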
Return code is always 0 after running echo "\$ macroName " | ftp -i mymachine
1,603,959,433,000
I uploaded a file via FileZilla to a ProFTPD server (on Ubuntu 14.04 LTS), but it reported a 550 error:

550 download.html: Permission denied
Error: Critical file transfer error

I tried setting the directory to chmod 777, but the error is the same. FTP download works OK. Your comments are welcome.
Exactly the same thing happened to me after upgrading from Precise Pangolin to Trusty Tahr. I investigated, and it looks like /etc/vsftpd.conf, the FTP configuration file, was one of the configuration files amended during the upgrade. Specifically, this line:

write_enable=YES

which I had previously uncommented was commented out again. I uncommented it, restarted the FTP server (sudo restart vsftpd), and suddenly I could upload and amend file permissions again.
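A quick way to confirm the directive survived an upgrade is to grep for the uncommented form; a sketch, demonstrated here against a sample file standing in for /etc/vsftpd.conf:

```shell
conf=$(mktemp)                          # stand-in for /etc/vsftpd.conf
printf '%s\n' '#write_enable=YES' 'write_enable=YES' > "$conf"

if grep -Eq '^[[:space:]]*write_enable=YES' "$conf"; then
    echo "uploads enabled"
else
    echo "write_enable is commented out or missing"
fi
```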
uploaded file via filezilla to proftp server(on ubuntu 14.04 lts) but reported 550 error
1,603,959,433,000
I'm trying to connect to an FTP site but with, e.g., wget: Logging in as anonymous ... Logged in! ==> SYST ... done. ==> PWD ... done. ==> TYPE I ... done. ==> CWD (1) /gcrypt/gnutls ... done. ==> SIZE v3.2 ... done. ==> PASV ... couldn't connect to 217.69.76.55 port 40258: Network is unreachable If I disable iptables, it works, so obviously that is the problem. Yet I'm sure I have everything set up properly: # Accept related, established... -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT # ftp/http(s) clients -A OUTPUT -p tcp -m multiport --dports 21,80,443,8080 -j ACCEPT -A OUTPUT -p udp --dport 21 -j ACCEPT What's wrong?
Iptables needs some kernel modules loaded in order for "RELATED, ESTABLISHED" to work. If your HTTP clients are okay, you obviously have some of them. > lsmod | grep conntrack nf_conntrack_ipv4 20258 6 nf_defrag_ipv4 12702 1 nf_conntrack_ipv4 xt_conntrack 12760 6 nf_conntrack 99996 2 xt_conntrack,nf_conntrack_ipv4 However, the one for ftp, nf_conntrack_ftp, is additional and, unlike a device or filesystem driver, will not be loaded automatically by the kernel. > modprobe nf_conntrack_ftp Should do it. There is not AFAIK a cross-distro method for autoloading modules at boot, but on Fedora you can add: IPTABLES_MODULES="nf_conntrack_ftp" to /etc/sysconfig/iptables-config. On other systems which use systemd but do not have this file, see man modules-load.d.
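On systemd-based systems that lack /etc/sysconfig/iptables-config, the modules-load.d mechanism mentioned at the end takes a one-line drop-in file (a sketch; the file name is arbitrary):

```
# /etc/modules-load.d/ftp-conntrack.conf
nf_conntrack_ftp
```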
FTP download problems with iptables -- Port 21 connection is allowed but "Network is unreachable"
1,603,959,433,000
I added the below 2 lines in the end of the configuration file /etc/vsftpd.conf so as to deny a local user named 'tentenths' from loggin in the ftp server. I restarted the vsftpd service after the change. But still the user was permitted to login. Where am I mistaken! userlist_deny=YES userlist_file=/etc/vsftpd.denied_users The content of the above file is: ravbholua@ravi:~$ cat /etc/vsftpd.denied_users tentenths ravbholua@ravi:~$ I am referring this link for the same. Have a look at the whole conf. file: # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. 
#anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. #xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. 
# vsftpd predicted this attack and has always been safe, reporting the size of
# the raw file.
# ASCII mangling is a horrible feature of the protocol.
#ascii_upload_enable=YES
#ascii_download_enable=YES
#
# You may fully customise the login banner string:
#ftpd_banner=Welcome to blah FTP service.
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/etc/vsftpd.banned_emails
#
# You may restrict local users to their home directories. See the FAQ for
# the possible risks in this before using chroot_local_user or
# chroot_list_enable below.
chroot_local_user=NO
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
# (Warning! chroot'ing can be very dangerous. If using chroot, make sure that
# the user does not have write access to the top level directory within the
# chroot)
#chroot_local_user=YES
chroot_list_enable=YES
# (default follows)
chroot_list_file=/etc/vsftpd.chroot_list
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
#ls_recurse_enable=YES
#
# Customization
#
# Some of vsftpd's settings don't fit the filesystem layout by
# default.
#
# This option should be the name of a directory which is empty. Also, the
# directory should not be writable by the ftp user. This directory is used
# as a secure chroot() jail at times vsftpd does not require filesystem
# access.
secure_chroot_dir=/var/run/vsftpd/empty
#
# This string is the name of the PAM service vsftpd will use.
pam_service_name=vsftpd
#
# This option specifies the location of the RSA certificate to use for SSL
# encrypted connections.
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
# This option specifies the location of the RSA key to use for SSL
# encrypted connections.
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
userlist_deny=YES
userlist_file=/etc/vsftpd.denied_users
You also need to add this configuration option:

userlist_enable=YES

Details:

userlist_deny — When used in conjunction with the userlist_enable directive and set to NO, all local users are denied access unless the username is listed in the file specified by the userlist_file directive. Because access is denied before the client is asked for a password, setting this directive to NO prevents local users from submitting unencrypted passwords over the network. The default value is YES.

userlist_enable — When enabled, the users listed in the file specified by the userlist_file directive are denied access. Because access is denied before the client is asked for a password, users are prevented from submitting unencrypted passwords over the network. The default value is NO; however, under Red Hat Enterprise Linux the value is set to YES.

userlist_file — Specifies the file referenced by vsftpd when the userlist_enable directive is enabled.
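Putting the answer together, the user-list portion of /etc/vsftpd.conf would look like the sketch below (the file path is the one from the question; restart vsftpd after editing):

```
userlist_enable=YES
userlist_deny=YES
userlist_file=/etc/vsftpd.denied_users
```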
Not able to deny selected local user to login ftp server
1,603,959,433,000
I have an FTP server running on the machine 192.168.122.50. Below are the last three directives from the /etc/vsftpd/vsftpd.conf file:

pam_service_name=vsftpd
#userlist_enable=YES
#userlist_deny=YES

Below is the content of the /etc/pam.d/vsftpd file:

#%PAM-1.0
session    optional     pam_keyinit.so force revoke
auth       required     pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
auth       required     pam_shells.so
auth       include      password-auth
account    include      password-auth
session    required     pam_loginuid.so
session    include      password-auth

Below is the content of the /etc/vsftpd/ftpusers file:

#This is the list of blocked users as per the following line in /etc/pam.d/vsftpd
#auth required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody

So of these users, for example, root should be blocked from accessing the FTP server on this machine, and ssam should be allowed. Now from the client machine 192.168.0.2, I tried this:

[ssam@centos ~]$ ftp 192.168.122.50
Connected to 192.168.122.50 (192.168.122.50).
220 (vsFTPd 2.2.2)
Name (192.168.122.50:ssam): ssam
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> exit
221 Goodbye.
[ssam@centos ~]$ ftp 192.168.122.50
Connected to 192.168.122.50 (192.168.122.50).
220 (vsFTPd 2.2.2)
Name (192.168.122.50:ssam): root
331 Please specify the password.
Password:
530 Login incorrect.
Login failed.
ftp> bye
221 Goodbye.

The output was in line with the PAM settings, as the user ssam was allowed and the user root was blocked. Now I tried the lftp command at the client to connect to the server as the user root:

[ssam@centos ~]$ lftp root@192.168.122.50
Password:
lftp root@192.168.122.50:~>

The login was indeed possible for the blocked user root, as I got the lftp prompt. Now I tried to run a command:

lftp root@192.168.122.50:~> ls
ls: Login failed: 530 Login incorrect.
Now it seems like the server has come to its senses and tells me the login failed, so I was not able to do anything else. But the pluggable authentication module should have blocked the user root from entering the server in the first instance. Could anybody explain this for me?
IMO, it's the way the lftp client works. Check the log file /var/log/secure (on RHEL/CentOS) or similar for events like:

pam_listfile(vsftpd:auth): Refused user root for service vsftpd

I verified this behavior by sniffing FTP traffic (RHEL 6.x, vsftpd 2.2.2, lftp 4.0.9). The FTP connection is not established until you enter some valid FTP command (such as ls), not when you enter the user name and its password.
"lftp" login still possible for "/etc/pam.d/vsftpd" blocked users
1,603,959,433,000
In the case of TFTP, one can choose between "ASCII" (netascii) and "octet" mode. As I understand it, the 00001010 (new-line) and 00001101 (carriage-return) bit streams in a binary file should get some sort of special treatment in ASCII mode. So I created a file containing those characters:

root@A58:~# printf '\n' | xxd | xxd -r > bfile; printf '\r' | xxd | xxd -r >> bfile; printf 'A' | xxd | xxd -r >> bfile
root@A58:~# xxd -b bfile
0000000: 00001010 00001101 01000001                             ..A
root@A58:~#

However, when I uploaded this file from the TFTP client to the TFTP server using both "octet" and "netascii" modes, the file reached the TFTP server in the same condition and had exactly the same content in both cases:

T42 ~ # cmp /srv/tftp/reverse_ascii /srv/tftp/reverse_binary
T42 ~ #

Did I do something wrong? Or how should ASCII mode mangle the binary data?
TFTP uses mechanisms similar to telnet for transmitting ASCII: it follows the rules set out in the NVT specification. So effectively end-of-line markers are translated to <cr><lf>, and if you want to send an actual <cr> then this is translated to <cr><nul>.

Hexdumping a file I created:

00000000  0d 54 65 73 74 69 6e 67  33 0d 0a    |.Testing3..|

However, on transmission over tftp (captured with tcpdump -X):

0x0020:  0d00 5465 7374 696e 6733 0d00 0d0a   ..Testing3....

Note how the <cr><lf> sequence has been converted to <cr><nul><cr><lf>.

When I diff the results of the local and remote file, I end up with the same file. This is because the <cr><nul> sequence is translated back to <cr>, and the local format (under Unix) for a newline is <lf>, so the <cr><lf> is turned back into <lf> and the file as transmitted is received intact.

I'm not so sure how a DOS tftp server would handle the <cr><nul><cr><lf> sequence; I have a feeling it may mangle the output to have an extra <cr> (<cr><nul> becomes <cr> and <cr><lf> becomes <cr><lf>), especially given the RFC states:

A host which receives netascii mode data must translate the data to its own format.

References: the TFTP RFC, RFC 1350, and the Telnet NVT specification.
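The per-byte translation described above (each original <lf> becomes <cr><lf>, each original <cr> becomes <cr><nul>) can be sketched in shell by operating on a hex dump, which avoids having to hold NUL bytes in shell variables. The function name and test bytes are my own, not from the answer:

```shell
# netascii-encode stdin, emitting the result as space-separated hex bytes
netascii_encode_hex() {
  od -An -v -tx1 |        # raw bytes as hex tokens
  tr ' ' '\n' | grep . |  # one hex byte per line
  sed 's/^0a$/0d 0a/; s/^0d$/0d 00/' |  # LF -> CR LF, lone CR -> CR NUL
  tr '\n' ' '
}

# the question's test file (LF, CR, 'A') encodes to: 0d 0a 0d 00 41
printf '\n\rA' | netascii_encode_hex
```

Decoding on the receiving host reverses the mapping, which is why the round trip in the answer produced an identical file.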
Carriage Return and Line Feed bit streams in a binary file during TFTP upload in ASCII mode
1,603,959,433,000
I have two dedicated servers: one has CentOS, the other Ubuntu. I installed Backup Manager on both servers, but I see two different behaviors.

On CentOS:

The backup of my files runs every night; I want it to run every week.
The backup of my DB runs every night (that's good).
The action is executed twice each night (once at 2 am, once at 4 am... why??).
Backup Manager is supposed to erase files older than 5 days from my FTP, but this morning my backup server was completely full.

On Ubuntu:

The backup does not start automatically. Why?
When I start it manually, it seems to work but I get an error. In fact, I use a PHP script that fires when Backup Manager has completed. My script checks file integrity with an md5 checksum, but there seems to be an error.

Here is my backup-manager.conf on CentOS.

Here is my post-backup script for CentOS:

It does not use the checksum.
It works.
However, it does not find the hostname. Why??

The mail is sent like this:

ns390769.ovh.net-sacdunjour.20111115.sql.gz (0.08 GB)
ns390769.ovh.net-www.20111115.tar.gz (0.01 GB)
ns390769.ovh.net-www.20111115.master.tar.gz (1.65 GB)
ns390769.ovh.net-20111115.md5 (0 GB)
ns390769.ovh.net-itbag_prestashop.20111115.sql.gz (0 GB)
Total: 1.74 Go

Here is my backup-manager.conf on Ubuntu.

Here is my post-backup script on Ubuntu:

It uses the checksum.
It does not find the file size.
It finds the hostname (unlike on CentOS).

The mail is sent like this:

Files locally:
ns384990.ovh.net-20111108.md5 (0 B)
ns384990.ovh.net-all-mysql-databases.20111108.sql.bz2 (0 B)
ns384990.ovh.net-www.20111108.master.tar.gz (0 B)
TOTAL: 0 B
Integrity problem in the files sent to the FTP server

It's a real headache for me. Can you help me? Sorry for the length of my message, but I wanted to give you as much information as possible.
I just re-installed and everything works. This is clearer.
Backup Manager and Cron : CentOs and Ubuntu 11.10
1,603,959,433,000
I'm using the Squid proxy on CentOS, but I can't connect to FTP sites from the WAN. I did open the FTP ports in the firewall on CentOS; however, I receive a "Page cannot be displayed" error when I try to connect to FTP sites. What should I do?
Add the below 3 lines to squid.conf and reload Squid. This should work for FTP upload and download via Squid:

acl SSL_ports port 443 21
acl ftp proto FTP
http_access allow ftp
Connecting to FTP sites via squid
1,603,959,433,000
I set up SFTP, but I have a problem regarding iptables. Here are my rules:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 20 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT

By the way, I'm using the vsftpd service for FTP. FTP does not connect until I stop the iptables service. When I attempt to connect to FTP, it gives me a "Listing remote folder failed" error. What should I do?
SFTP and FTP are not the same thing. If you really mean SFTP, that's an SSH-based transmission that takes place only over port 22 (unless you configure your SSH daemon to listen on another port).

FTP is an ancient file-transfer protocol that operates on ports 20 and 21 (and possibly others). Firewalls need to do connection state-tracking to properly support it. Make sure you have the netfilter FTP connection tracking module (nf_conntrack_ftp) loaded.

You can configure the min/max ports for "passive mode" FTP in vsftpd via the pasv_min_port and pasv_max_port options in vsftpd.conf. Could you narrow the range down to a small number of ports (perhaps one port) and open them in the firewall?

Are you sure the firewall is at fault? If you disable it temporarily, do things work?
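As a concrete sketch of that advice (the port numbers are my own example values, not from the question): pin the passive range in vsftpd.conf, load the conntrack helper, and open the range in the same chain the question uses:

```
# /etc/vsftpd.conf
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50010

# as root:
#   modprobe nf_conntrack_ftp
#   iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50000:50010 -j ACCEPT
```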
Why am I getting a "Listing remote folder failed" error when I try to connect with SFTP?
1,603,959,433,000
I want to mount a remote FTP directory in Linux. One of the main tools I found is curlftpfs. However, when I mount the remote FTP directory and try to read its contents, I get the following error:

$ curlftpfs -o allow_other ftp://test.rebex.net testftp
$ ls testftp
ls: reading directory 'testftp': Input/output error

test.rebex.net is a publicly available FTP server. I also tried with several other servers and a private one, and got the same error. Using root, options like ssl and ftp_port=-, fully disabling the firewall, and running on a fresh new Arch/Manjaro system didn't help. Still, I can connect to those servers using FileZilla or the ftp tool.

The output when running with the debug (-d) option (full output, not redacted or truncated):

$ curlftpfs -d ftp://test.rebex.net testftp
FUSE library version: 2.9.9
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 104, pid: 0
INIT: 7.39
flags=0x73fffffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
unique: 2, success, outsize: 40

The curlftpfs version used is 0.9.2. What could be the problem?
It seems that curlftpfs does not support the newer standardized FTP command MLSD, which is used to retrieve the directory listing. Some servers (like test.rebex.net) most likely do not support the older (and non-standardized) LIST -a command. There are forks of curlftpfs supporting the MLSD command (like this one), but I have not tried them. Instead, I found an alternative for myself: rclone. It supports FUSE and FTP, and uses the MLSD command.
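For reference, a minimal rclone setup for the same server could look like this (the remote name and mount point are my own choices; the password must be stored in rclone's obscured form, e.g. via rclone obscure):

```
# ~/.config/rclone/rclone.conf
[myftp]
type = ftp
host = test.rebex.net
user = demo
pass = <output of `rclone obscure ...`>

# then mount it:
#   rclone mount myftp: ~/testftp
```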
Reading mounted curlftpfs directory results in "Input/output error"
1,549,997,873,000
I have a CCTV server that is backed up daily to a remote FTP server using an lftp command. The CCTV server saves videos to a new folder each day, and the backup runs once a day at 1 am, so each backup only affects 2 folders. After 28 days the local copies are all deleted. The command I currently use is:

mirror --reverse --use-cache --allow-chown --allow-suid --no-umask --verbose

The file transfer rate when moving files has been consistent for 2 years, but the wait between folders is slowly increasing. This means that while the file transfer rate is 1 Mb/s, if you take the wall-clock time that backing up 2 GB takes each day, the average speed after 2 years is now down to 0.5 Mb/s.

Is anything in my command causing the process to bloat? --use-cache, for example? Could I have the mirror command run one thread per folder, so that it can get on with loading folder contents while it's uploading other files?
After trying clearing the cache, and removing the --use-cache command entirely, my service provider admitted they were having "network issues" their end, which was causing ls commands to run very slowly. Fortunately detailed logs from my end were able to demonstrate their error, and now show things running at full speed. Keep your logs people!!
Why would an lftp mirror operation slow down over time?
1,549,997,873,000
I have a versions/ directory on a remote server to which I have only rsync daemon and FTP access. This directory contains a set of subdirectories, each named after the datetime of a deployed codebase (e.g., versions/20150101000000/, versions/20150102120000/, etc.). I need to automate deleting old versions, and my intuition is that an rsync with an empty source directory will be much faster than recursively iterating with FTP (I've verified I do not have FTP's SITE EXEC available to do a simple rm -rf versions/$version/).

I am using an initial rsync --list-only command to parse the list of versions, sorting them (I've seen evidence that rsync doesn't always return the list in the expected order), and then choosing to delete all but the 3 most recent versions. The command to empty a specific directory is straightforward:

rsync -vr --delete `mktemp -d`/ rsync://user@host:123/module/versions/$version/

However, in expected rsync fashion, this will not delete the top-level directory itself. Is there a way with rsync to request that the directory being synced be removed in addition to its contents? Otherwise, I suspect it should be possible to run the sync against the parent versions/ directory with some combination of --exclude/--include/--filter arguments, but for the life of me I cannot figure it out. I've tried things like:

rsync -vr --delete --include=/20150101000000/ rsync://user@host:123/module/versions/

with and without either slash, with and without an --exclude='*' listed before or after the --include arguments, as well as with --delete-excluded in the event that matching both the --include and --exclude arguments requires it. Nothing seems to be achieving my desired results.

I could have already completed the task by doing the rsync to empty the contents, followed up with an ftp rmdir on the empty directory, but I am stubborn enough to know that this is possible with rsync alone and so will hope for a solution. Thanks!
Thanks to a friendly IRC contact, I've found a working solution to delete a specific directory using a single rsync command. It works by specifying an --include argument for the contents of the directory (--include '/20150101000000/**'), a separate --include for the directory itself (--include '/20150101000000/'), and an --exclude '*' to ignore all other contents of the top-level directory (in this case, versions/):

rsync -vr --delete --include '/20150101000000/**' --include '/20150101000000/' --exclude '*' `mktemp -d`/ rsync://user@host:123/module/versions/

I then went digging into the man page in an attempt to fully understand the above, and came across this handy section:

a trailing "dir_name/***" will match both the directory (as if "dir_name/" had been specified) and everything in the directory (as if "dir_name/**" had been specified). This behavior was added in version 2.6.7.

This information allows us to simplify to a single --include argument:

rsync -vr --delete --include '/20150101000000/***' --exclude '*' `mktemp -d`/ rsync://user@host:123/module/versions/
Delete destination directories with rsync (contents as well as parent)
1,549,997,873,000
I have a script which pulls data from different servers whose details it reads from an external file. It reads the file, and the verbose output shows all the matching files, but it fetches only one file from the remote host. Following is my script:

while IFS=','; read region sdp ip1 ip2 ip3 user1 pass1 user2 pass2 user3 pass3
do
in=/var/opt/fds/statistics/
out=/pmautomation/PM/RawFiles/Data/BHCA/$date/$region/$sdp/
file=\*"PSC-TrafficHandler_8.1_A_"\*"_System."$date\*".stat"
mkdir -p /pmautomation/PM/RawFiles/Data/BHCA/$date/$region/$sdp/
ftp -in $ip1<<END_SCRIPT
quote USER $user1
quote PASS $pass1
bin
prompt off
lcd /pmautomation/PM/RawFiles/Data/BHCA/$date/$region/$sdp/
cd /var/opt/fds/statistics/
binary
mget *PSC-TrafficHandler_8.1_A_*_System.$date*.stat
bye
END_SCRIPT
done < /root/SDP_BHC/bin/Credentials.csv

Following is the output:

IP: 10.XXX.XX.XX
Interactive mode on.
Local directory now /pmautomation/PM/RawFiles/Data/BHCA/20150802/EAST/WB_SDP49
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0000.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0100.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0200.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0300.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0400.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0500.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0600.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0700.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0800.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_0900.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1000.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1100.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1200.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1300.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1400.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1500.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1600.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1700.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1800.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_1900.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_2000.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_2100.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_2200.stat?
mget WDSDP49B_PSC-TrafficHandler_8.1_A_2_System.20150802_2300.stat?

Why is my mget command not able to get all the files, and getting only one file of all the matching files?
I have got some pointers from Jeff. Somehow prompt off was not working and I was getting prompted for each matching file. I tried putting 'y' lines below the mget command and it worked. Following is the updated code:

while IFS=','; read region sdp ip1 ip2 ip3 user1 pass1 user2 pass2 user3 pass3
do
in=/var/opt/fds/statistics/
out=/pmautomation/PM/RawFiles/Data/BHCA/$date/$region/$sdp/
file=\*"PSC-TrafficHandler_8.1_A_"\*"_System."$date\*".stat"
mkdir -p /pmautomation/PM/RawFiles/Data/BHCA/$date/$region/$sdp/
ftp -in $ip1<<END_SCRIPT
quote USER $user1
quote PASS $pass1
bin
prompt off
lcd /pmautomation/PM/RawFiles/Data/BHCA/$date/$region/$sdp/
cd /var/opt/fds/statistics/
binary
mget *PSC-TrafficHandler_8.1_A_*_System.$date*.stat
y
y
y
y
bye
END_SCRIPT
done < /root/SDP_BHC/bin/Credentials.csv

This is really strange behavior, but it worked for me. Luckily I knew the number of files on the remote host, and putting the same number of 'y' lines made it work.
Why won't my FTP script get all the files using mget command?
1,549,997,873,000
I have connected 2 PCs using an ethernet cable and set up FTP on one of them to transfer some >100 GB of files. However, trying to download, I run into the problem of the speed being no more than 50 kB/s. It happens whether I download through Nautilus or FileZilla. However, if I try to download a large file using Google Chrome, it downloads at a speed of around 50 MB/s, which is pretty good. But Chrome cannot download directories. What can be a solution to either speed up the LAN or download a directory through Chrome?

UPD: I tried to create a torrent and send it that way, but it's no better; it stays around 100 kB/s.

UPD1: I changed the cable and nothing changed; also, the transfer stops completely if I turn on WiFi in parallel to the cable.

UPD2: I found advice to edit /etc/default/grub to disable IPv6, but it didn't help either. A small detail: both the sender and receiver file systems are NTFS; does that make a difference?
I found it. The problem was not in the connection at all; it was in the file system. I was trying to copy from NTFS to NTFS. When I formatted the receiving FS to ext4, the speed increased to 40+ MB/s.
Super-slow LAN speed, unless downloading through Chrome's FTP client [closed]
1,549,997,873,000
ftp -n ${FTP_HOST} << STOP
    user ${FTP_USERNAME} ${FTP_PASSWORD}
    binary
    lcd ${FTP_FROM_DIR}
    cd ${FTP_TO_DIR}
    put ${reportFileName}
STOP

That is my code, which is not successfully copying the file to the remote host, although when used manually it copies the file successfully. When run from a script, "(local-file) usage: put local-file remote-file" appears in the console. What could be the problem?
I suspect part of your issue is with the way you've constructed your heredoc. Try it like so:

$ ftp -n ${FTP_HOST} << STOP
user ${FTP_USERNAME} ${FTP_PASSWORD}
binary
lcd ${FTP_FROM_DIR}
cd ${FTP_TO_DIR}
put ${reportFileName}
STOP

If you truly want those spaces/tabs in the commands then you'll need to change to this form of the heredoc:

$ ftp -n ${FTP_HOST} <<-STOP
	user ${FTP_USERNAME} ${FTP_PASSWORD}
	binary
	lcd ${FTP_FROM_DIR}
	cd ${FTP_TO_DIR}
	put ${reportFileName}
STOP

Whenever you have leading tabs within your heredoc you need to make use of the <<- form to tell the shell to strip them out prior to running the commands contained within.

A sidebar on using echo

When you're parsing variables within scripts and you innocently use echo to print messages, you're typically getting more than just the string you're interested in. echo will also include a trailing newline character. You can see it in this example:

$ echo $(echo hi) | hexdump -C
00000000  68 69 0a                                          |hi.|
00000003

The 0a is the ASCII code for a newline. So you'll typically want to disable this behavior by using the -n switch to echo:

$ echo -n $(echo hi) | hexdump -C
00000000  68 69                                             |hi|
00000002

Or even better, upgrade to using printf.

References

How to use ftp in a shell script
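The << versus <<- difference is easy to demonstrate. In the sketch below the heredoc body is generated with printf so that its leading tab survives copy and paste; the file name comes from mktemp:

```shell
# build a script whose heredoc body is indented with a literal tab
demo=$(mktemp)
printf 'cat <<-EOF\n\thello from heredoc\nEOF\n' > "$demo"

bash "$demo"   # -> hello from heredoc   (the leading tab is stripped by <<-)
```

With a plain << the tab would be preserved in the output, which is exactly the sort of stray whitespace that confuses line-oriented clients like ftp.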
FTP "put" not copying file to remote host when ran from shell script but copies the file to remote host when ran manually
1,549,997,873,000
I had to transfer a few videos and photos to an FTP server. For that, the person on the remote machine gave me the IP address of the machine where I had to transfer them, and told me to transfer them by FTP. He provided me with a username and password for the same. He was expecting that I have Windows locally, so he told me how to install an FTP application on my local box. But actually I have Ubuntu 13.04 here.

Now, I opened a terminal and typed ftp and then the IP address of the remote box. I got connected without being prompted for the username and password. Then I transferred the files via FTP. Two days later, I ftped to that machine and was able to see the files that I had transferred. But the remote user told me he didn't receive any!

Until now I was thinking that it's his mistake, that he hadn't checked his machine properly, because I was 100% sure that the files are in his system, as I had confirmed earlier by doing FTP to that machine. It occurred to me today that he may also be right, and that the issue of him not seeing the files may be related to the username and password that he had sent me. As I didn't use his username (the one he provided me) to log in via FTP, is that the reason that the files were uploaded to a location different from the one where he was expecting them? In other words, does the upload location depend on the username I have used? If so, then how do I use ftp specifying a particular username?
It seems that the FTP server allows anonymous FTP: FTP where the username is conventionally anonymous and every password is accepted. Your FTP client tried an anonymous FTP login, and this succeeded. The files are in whatever directory is the default directory for anonymous users; this is determined by the FTP server configuration.

Anonymous FTP is commonly used for public download sites. Allowing uploads for anonymous users is a lot less common and risks having the site used for malware distribution. You should recommend that the server owner disable anonymous FTP, or at least disable uploads for anonymous users.

Here are some ways to log in with your user name and password instead of trying to log in anonymously. Some ways don't work with all FTP clients.

Create a .netrc file in your home directory containing the line

machine server.example.com login ravi

Run ftp ravi@server.example.com or ftp ftp://ravi@server.example.com/ to log in as user ravi.

Run ftp with the -n option to disable automatic login for this session, then log in with the user command.
FTP transfer: I can see files on remote machine, but remote user can't
1,549,997,873,000
I am trying to configure ProFTPD to change the group for newly created files/directories. In my config I have this:

<Directory /home/*>
  GroupOwner www
</Directory>

which does not seem to work. All users have been added to the www group. Debug shows nothing regarding a group change. I'm using FreeBSD 9.0-RELEASE.

EDIT: I'm willing to try any other FTP server that makes this easier.
After deep research, I found out that ProFTPD is not capable of changing the group of a newly uploaded file. However, a workaround was found: simply change the group of the user's home folder, after which all newly uploaded files will inherit the group from the home folder. Not much of a solution, but at least something. =)
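On Linux, the workaround can be made more robust with the setgid bit, which makes files created in a directory inherit that directory's group regardless of the creating user's primary group (FreeBSD directories behave this way by default). A quick self-contained sketch, using the current group in place of the www group from the question:

```shell
d=$(mktemp -d)
chgrp "$(id -gn)" "$d"   # in the question's setup this would be: chgrp www /home/user
chmod g+s "$d"           # setgid: new files inherit the directory's group

touch "$d/upload.jpg"
stat -c '%G' "$d/upload.jpg"   # prints the same group as the directory
```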
proftpd does not change default group for new files
1,549,997,873,000
I'm setting up a ProFTPD server so I can upload files to my webserver, but I've never tried this before. I've installed proftpd, added a user with the home folder /home/FTP-shared, and gave it the /bin/false shell as well. But what do I do configuration-wise now in ProFTPD to be able to log in with this user and upload, download, delete and so on? Also, my idea was to symlink to the Apache www folder from the FTP user's directory. Will that work?
For your first question, you can read it here. For your second question, I'm currently using mount --bind.
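For reference, a bind mount can be made persistent via /etc/fstab; the paths below are assumptions based on the question's layout:

```
# /etc/fstab -- expose the Apache docroot inside the FTP user's home
/var/www    /home/FTP-shared/www    none    bind    0 0

# or one-off, as root:
#   mount --bind /var/www /home/FTP-shared/www
```

Unlike a symlink, a bind mount keeps working even when the FTP user is chrooted into their home directory.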
Creating a proFTPd user
1,549,997,873,000
I have an FTP server (Debian) setup where users send JPG images. I need a process running on the background that every time a picture is sent via FTP a bash script is executed for generating thumbnail files for each image uploaded. I already have the script that generates the thumbnail, the problem is it needs to be executed manually, but what I want is the script to be executed each time an image file is uploaded. How can this work?
Consider using inotifywait, e.g. to watch a directory:

inotifywait .

Then create a file in that directory. Here's a previous answer from the Unix & Linux Stack Exchange.
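If inotifywait is not available, a cruder but portable alternative is to poll the upload directory and diff snapshots of its listing. This is a polling sketch, not inotify, and the directory and file names are made up:

```shell
dir=$(mktemp -d)          # stand-in for the FTP upload directory
snap=$(mktemp)
ls -1 "$dir" | sort > "$snap"    # initial snapshot

touch "$dir/photo.jpg"           # simulate an upload arriving

# names present now but absent from the snapshot; in a real loop,
# each of these would be passed to the thumbnail script
snap2=$(mktemp)
ls -1 "$dir" | sort > "$snap2"
new=$(comm -13 "$snap" "$snap2")
echo "$new"   # -> photo.jpg
```

In practice this would run in a loop with a short sleep, replacing the old snapshot with the new one on each pass.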
Running a bash script each time a file is uploaded? [duplicate]
1,549,997,873,000
I use a script for work that invokes lftp to mirror a directory:

#!/bin/bash

HOST='ftp.example.com'
USER='pretenduser'
PASS='pretendpass'
TARGETFOLDER='/home/pretenduser/Dropbox/lftp'
SOURCEFOLDER='/files/Inbox'
LOG='/home/pretenduser/Scripts/lftp.log'

lftp -c "
set ftp:ssl-allow no
open $HOST
user $USER $PASS
mirror --verbose --delete $SOURCEFOLDER $TARGETFOLDER
bye
"
>> $LOG

lftp is not writing to $LOG; it creates the file, but the file stays empty. I have also tried 2> and 1> instead of >>. What am I doing wrong?
As Gilles commented, your redirection is on a separate line, which means it's a separate (empty) command; the lftp command ended at the closing double quote. Simply change the lftp command to:

lftp -c "
set ftp:ssl-allow no
open $HOST
user $USER $PASS
mirror --verbose --delete $SOURCEFOLDER $TARGETFOLDER
bye
" >> $LOG
Simple question: lftp not writing to $LOG --- what am I doing wrong? [closed]
1,549,997,873,000
After reading through this tutorial, I still have a persistent question. In the beginning of the article the writer says:

Warning: FTP is inherently insecure! Consider using SFTP instead of FTP.

I am assuming that he might mean FTPS (as I think that is what his article explains, but I am not sure). However, at the bottom of the article, which is all about how to use vsftpd over SSL/TLS, he shows an image that looks like this:

where you can quite clearly see that the encryption setting is "Require explicit FTP over TLS". So, is this any different from using FTPS, and if it is, what is the difference?
SFTP and FTP are, in fact, different protocols. SFTP is actually built on top of SSH, the Secure SHell protocol, while FTP-over-SSL (aka FTPS) is simply vanilla FTP over an encrypted transport-layer connection, the same way HTTPS is HTTP over an encrypted connection. If I'm not mistaken, it would be possible for a plain-FTP client to connect through an SSL proxy to an FTPS-enabled server, or for an FTPS client to connect to a plain FTP server hiding behind an SSL proxy server. The same does not hold for SFTP; it must be implemented at both endpoints. The relative merits I leave for others to discuss, but (as I understand it) SSH/SFTP's handling of credentials is much simpler on small networks.
When FTP Requires FTP over TLS is it FTPS?
1,317,280,996,000
I have a minimal install of CentOS in VirtualBox. I want to run an FTP service to share files between the host and my VM, and then learn about FTP servers. I installed vsftpd and changed the vsftpd.conf file as below:

anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=O22
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port 2O=YES
xferlog_std format=YES
chroot_local_user=YES
listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

But when I type service vsftpd start I get the following error:

Job for vsftpd.service failed because the control process exited with error code. See "systemctl status vsftpd.service" and "journalctl -xe" for details.

Is this issue happening due to a wrong config as shown above, or is it something else? What can I do to start my FTP server? Thanks!

EDIT: Output of systemctl status -l vsftpd.service:

[user@localhost vsftpd]$ systemctl status -l vsftpd.service
vsftpd.service - Vsftpd ftp daemon
   Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Ter 2017-05-09 21:03:19 -03; 3min 2s ago
  Process: 3047 ExecStart=/usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf (code=exited, status=2)

Mai 09 21:03:19 localhost.localdomain systemd[1]: Starting Vsftpd ftp daemon...
Mai 09 21:03:19 localhost.localdomain systemd[1]: vsftpd.service: control process exited, code=exited status=2
Mai 09 21:03:19 localhost.localdomain systemd[1]: Failed to start Vsftpd ftp daemon.
Mai 09 21:03:19 localhost.localdomain systemd[1]: Unit vsftpd.service entered failed state.
Mai 09 21:03:19 localhost.localdomain systemd[1]: vsftpd.service failed.
[user@localhost vsftpd]$
You have a space between xferlog_std and format=YES according to the configuration you supplied. Also, you might wish to compare with a working configuration: $ sudo cat /etc/vsftpd/vsftpd.conf | grep -v "#" anonymous_enable=YES local_enable=YES write_enable=YES local_umask=022 anon_upload_enable=YES anon_mkdir_write_enable=YES dirmessage_enable=YES xferlog_enable=YES connect_from_port_20=YES chown_uploads=YES chown_username=abdullah xferlog_std_format=YES chroot_local_user=YES listen=NO listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES Source: Not able to upload as anonymous user in vsftpd
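As an aside, these two classes of typo (a space where an underscore belongs, and a letter 'O' where a zero belongs) are easy to scan for mechanically. The sketch below is purely illustrative: the $conf here-string stands in for your real /etc/vsftpd/vsftpd.conf, and the regex is just a heuristic I made up for these two error patterns:

```shell
# Hypothetical sanity check for vsftpd.conf typos:
# flag lines with a space before '=' (likely a missing underscore)
# or a letter 'O' adjacent to a digit (likely meant to be a zero).
conf='local_umask=O22
connect_from_port 2O=YES
xferlog_std format=YES
write_enable=YES'
printf '%s\n' "$conf" | grep -nE '^[A-Za-z_]+ [A-Za-z_]+=|O[0-9]|[0-9]O'
```

Each flagged line is printed with its line number; anything it reports is worth a second look before restarting the service.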
Centos: VSFTPD not Starting
1,317,280,996,000
The download accelerators I've found thus far only let me specify a single file to download. This is useful for single large files, but I'm looking for a tool that lets me hand over a list of multiple files to download simultaneously. In some cases I would want to download the entire list in the traditional accelerated fashion, but sometimes I wouldn't want acceleration - for example, when I was fetching script-generated content. EDIT: lftp is quite a good program (!), but has no way to show live information about all running downloads, only one (via wait all). I can write a script which watches lftp's file descriptors and displays statistics off of that, but that's decidedly inelegant... and thinking about it, I don't know how big the downloaded file is (since I have no access to the lftp process) so I can't calculate percentages anyway. :(
lftp can do that. You've got: pget to download a single file with several connections mirror -P 4 to download a tree with up to four connections and you can put any get in background to start another one with get file &  (also Ctrl-Z to put a download in background when using it interactively). You can set the number of connections per site with: set net:connection-limit 6 Use the jobs command to see the status of the download(s). lftp supports a number of protocols including HTTP, FTP, SFTP and is scriptable (#! /usr/bin/lftp -f, or lftp -c commands).
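If you'd rather drive this non-interactively, one approach is to generate a batch script and feed it to lftp -f. The sketch below uses placeholder URLs and file names; pget -n and set net:connection-limit are real lftp commands, but everything else is just an assumed layout:

```shell
# Sketch: build an lftp batch script from a list of URLs (placeholders).
# Each pget is backgrounded with '&' so the downloads run simultaneously;
# 'wait all' holds the script open until every one finishes.
urls='ftp://example.com/a.iso
ftp://example.com/b.iso'
{
  echo 'set net:connection-limit 6'
  printf '%s\n' "$urls" | while IFS= read -r u; do
    printf 'pget -n 4 "%s" &\n' "$u"
  done
  echo 'wait all'
} > /tmp/batch.lftp
cat /tmp/batch.lftp        # run it with: lftp -f /tmp/batch.lftp
```

The jobs command (interactively) still works for watching progress while such a batch runs.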
Console download accelerator that downloads *multiple* files simultaneously
1,317,280,996,000
Today I wrote a script which should copy all files via FTP to my destination server. The script is executed inside a zsh shell, so I guess I should use "sh script.sh" to execute it correctly? Also I pasted my script to https://www.shellcheck.net/, which says it's fine, but inside my zsh shell I get the following error: upload_file: command not found #!/bin/bash FTP_SERVER=ftp.ip FTP_USER=ftp.user FTP_PASS=ftp.passwd FTP_DESTINATION_DIR="/destination_directory" LOCAL_SOURCE_DIR="/home/$USER/pictures/local_pictures" function upload_file { local file=$1 echo "Hochladen von Datei $file" ftp -inv "$FTP_SERVER" << EOF user $FTP_USER $FTP_PASS binary cd $FTP_DESTINATION_DIR put $file quit EOF } find "$LOCAL_SOURCE_DIR" \( -name "*.jpg" -o -name "*.png" \) -exec bash -c 'upload_file "$0"' {} \; I am not very used to this kind of scripting...
The immediate problem is that the function you declare is not visible to the second bash instance you run from find ... -exec. https://shellcheck.net/ probably just sees a single-quoted literal string, and does not look inside it for problems. Using sh to run a Bash script could also be a problem. Your script doesn't use any Bash features other than the nonstandard function declaration, but there is really no reason to prefer that over the standard POSIX function declaration syntax. (If sh is a symlink to Bash on your system, it probably works even though many Bash-only features will be turned off when it is invoked as sh. But you should still understand the difference, and use bash to run Bash scripts.) Anyway, the preferred mechanism is to chmod +x the script, and then simply run it by specifying its path (or just its name, if the directory it's in is on your PATH). This allows the shebang to select the correct interpreter. (It's simply ignored as a comment if you explicitly run the script with sh or bash, or even something completely wrong like python or csh). With these details fixed, your script would look like this: #!/bin/bash FTP_SERVER=ftp.ip FTP_USER=ftp.user FTP_PASS=ftp.passwd FTP_DESTINATION_DIR="/destination_directory" LOCAL_SOURCE_DIR="/home/$USER/pictures/local_pictures" upload_file () { local file=$1 echo "Hochladen von Datei $file" ftp -inv "$FTP_SERVER" <<EOF user $FTP_USER $FTP_PASS binary cd $FTP_DESTINATION_DIR put $file quit EOF } export -f upload_file find "$LOCAL_SOURCE_DIR" \( -name "*.jpg" -o -name "*.png" \) -exec bash -c 'upload_file "$0"' {} \; A better fix still would be to allow upload_file to accept multiple file names, and only create one FTP session:
upload_file () { ( printf '%s\n' \ "user $FTP_USER $FTP_PASS" \ "binary" \ "cd $FTP_DESTINATION_DIR" local file for file; do # print diagnostics to stderr echo "Hochladen von Datei $file" >&2 echo "put $file" done echo "quit" ) | ftp -inv "$FTP_SERVER" } export -f upload_file find "$LOCAL_SOURCE_DIR" \( -name "*.jpg" -o -name "*.png" \) \ -exec bash -c 'upload_file "$@"' _ {} + Tangentially, probably don't use an .sh extension on your shell scripts, especially if they are not actually sh scripts. There is no strong convention around this, but ls doesn't have an extension, either; why should it?
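Before pointing either version at a real server, it can help to preview exactly what would be written to the control connection. The gen_cmds function below is a hypothetical stand-in that prints the same lines the answer pipes into ftp -inv, with dummy credentials:

```shell
# Preview the FTP command stream without connecting anywhere:
# the lines that would be piped into 'ftp -inv' are just printed instead.
FTP_USER=demo; FTP_PASS=secret; FTP_DESTINATION_DIR=/destination_directory
gen_cmds () {
  printf '%s\n' "user $FTP_USER $FTP_PASS" "binary" "cd $FTP_DESTINATION_DIR"
  for f; do printf 'put %s\n' "$f"; done
  echo quit
}
gen_cmds a.jpg b.png
```

Once the output looks right, piping it into ftp -inv "$FTP_SERVER" (as the answer does) performs the real transfer.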
Shellscript which copies all files automatically per ftp
1,317,280,996,000
I am trying to set up an FTP server on one of my devices that runs DietPi and I selected proFTPD as a server. I have installed the software and followed some set-up information I found here. But then I noticed that the service was not running. After trying to find it via ps aux | grep proftpd I did not succeed. After issuing: systemctl status proftpd.service I got the following: ● proftpd.service - LSB: Starts ProFTPD daemon Loaded: loaded (/etc/init.d/proftpd; generated) Active: failed (Result: exit-code) since Tue 2021-04-13 22:58:49 BST; 9s ago Docs: man:systemd-sysv-generator(8) Process: 26998 ExecStart=/etc/init.d/proftpd start (code=exited, status=1/FAILURE) Apr 13 22:58:48 DietPi systemd[1]: Starting LSB: Starts ProFTPD daemon... Apr 13 22:58:49 DietPi proftpd[26998]: Starting ftp server: proftpd2021-04-13 22:58:49,163 DietPi proftpd[27005]: mod_ctrls/0.9.5: error: unable to bind to local socket: Address already in use Apr 13 22:58:49 DietPi proftpd[26998]: 2021-04-13 22:58:49,242 DietPi proftpd[27005]: error: unable to stat() /var/log/proftpd: No such file or directory Apr 13 22:58:49 DietPi proftpd[26998]: 2021-04-13 22:58:49,244 DietPi proftpd[27005]: mod_ctrls/0.9.5: unable to open ControlsLog '/var/log/proftpd/controls.log': No such file or directory Apr 13 22:58:49 DietPi proftpd[26998]: 2021-04-13 22:58:49,246 DietPi proftpd[27005]: fatal: ControlsLog: unable to open '/var/log/proftpd/controls.log': No such file or directory on line 68 of '/etc/proftpd/proftpd.conf' Apr 13 22:58:49 DietPi proftpd[26998]: failed! Apr 13 22:58:49 DietPi systemd[1]: proftpd.service: Control process exited, code=exited, status=1/FAILURE Apr 13 22:58:49 DietPi systemd[1]: proftpd.service: Failed with result 'exit-code'. Apr 13 22:58:49 DietPi systemd[1]: Failed to start LSB: Starts ProFTPD daemon. So I dug a little bit here and it turns out that no other process runs or binds on port 21. So, what could be causing the service to fail here?
Furthermore, by issuing sudo lsof -i tcp:21 I do not get any response. Also, via nmap I get the following: PORT STATE SERVICE 22/tcp open ssh 53/tcp open domain 80/tcp open http No 21/tcp port here. Debug via proftpd -nd10 on the command line: proftpd -nd10 2021-04-14 08:13:45,498 DietPi proftpd[951]: using PCRE 8.39 2016-06-14 2021-04-14 08:13:45,508 DietPi proftpd[951]: using TCP receive buffer size of 131072 bytes 2021-04-14 08:13:45,510 DietPi proftpd[951]: using TCP send buffer size of 16384 bytes 2021-04-14 08:13:45,513 DietPi proftpd[951]: testing Unix domain socket using S_ISFIFO 2021-04-14 08:13:45,517 DietPi proftpd[951]: testing Unix domain socket using S_ISSOCK 2021-04-14 08:13:45,519 DietPi proftpd[951]: using S_ISSOCK macro for Unix domain socket detection 2021-04-14 08:13:45,528 DietPi proftpd[951]: mod_ctrls/0.9.5: error: unable to bind to local socket: Address already in use 2021-04-14 08:13:45,532 DietPi proftpd[951]: using 'UTF-8' as local charset for UTF-8 conversion 2021-04-14 08:13:45,535 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:376 2021-04-14 08:13:45,537 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:378 2021-04-14 08:13:45,541 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:385 2021-04-14 08:13:45,544 DietPi proftpd[951]: ROOT PRIVS at parser.c:1187 2021-04-14 08:13:45,549 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ctrls_admin.c' 2021-04-14 08:13:45,554 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ctrls_admin' (from '/usr/lib/proftpd/mod_ctrls_admin.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,558 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_tls.c' 2021-04-14 08:13:45,562 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_tls' (from '/usr/lib/proftpd/mod_tls.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,565 DietPi proftpd[951]: mod_tls/2.7: using OpenSSL 1.1.1d 10 Sep 2019 2021-04-14 08:13:45,587 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_radius.c' 2021-04-14 08:13:45,591 DietPi
proftpd[951]: mod_dso/0.5: loaded module 'mod_radius' (from '/usr/lib/proftpd/mod_radius.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,594 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_quotatab.c' 2021-04-14 08:13:45,599 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_quotatab' (from '/usr/lib/proftpd/mod_quotatab.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,602 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_quotatab_file.c' 2021-04-14 08:13:45,607 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_quotatab_file' (from '/usr/lib/proftpd/mod_quotatab_file.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,609 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_quotatab_radius.c' 2021-04-14 08:13:45,612 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_quotatab_radius' (from '/usr/lib/proftpd/mod_quotatab_radius.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,617 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_wrap.c' 2021-04-14 08:13:45,625 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_wrap' (from '/usr/lib/proftpd/mod_wrap.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,628 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_rewrite.c' 2021-04-14 08:13:45,633 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_rewrite' (from '/usr/lib/proftpd/mod_rewrite.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,636 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_load.c' 2021-04-14 08:13:45,639 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_load' (from '/usr/lib/proftpd/mod_load.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,643 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ban.c' 2021-04-14 08:13:45,648 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ban' (from '/usr/lib/proftpd/mod_ban.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,651 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_wrap2.c' 
2021-04-14 08:13:45,656 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_wrap2' (from '/usr/lib/proftpd/mod_wrap2.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,660 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_wrap2_file.c' 2021-04-14 08:13:45,664 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_wrap2_file' (from '/usr/lib/proftpd/mod_wrap2_file.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,668 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_dynmasq.c' 2021-04-14 08:13:45,673 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_dynmasq' (from '/usr/lib/proftpd/mod_dynmasq.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,675 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_exec.c' 2021-04-14 08:13:45,681 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_exec' (from '/usr/lib/proftpd/mod_exec.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,683 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_shaper.c' 2021-04-14 08:13:45,688 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_shaper' (from '/usr/lib/proftpd/mod_shaper.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,692 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ratio.c' 2021-04-14 08:13:45,696 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ratio' (from '/usr/lib/proftpd/mod_ratio.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,699 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_site_misc.c' 2021-04-14 08:13:45,704 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_site_misc' (from '/usr/lib/proftpd/mod_site_misc.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,706 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_sftp.c' 2021-04-14 08:13:45,722 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_sftp' (from '/usr/lib/proftpd/mod_sftp.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,725 DietPi proftpd[951]: mod_sftp/1.0.0: using OpenSSL 1.1.1d 
10 Sep 2019 2021-04-14 08:13:45,737 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_sftp_pam.c' 2021-04-14 08:13:45,741 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_sftp_pam' (from '/usr/lib/proftpd/mod_sftp_pam.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,744 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_facl.c' 2021-04-14 08:13:45,749 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_facl' (from '/usr/lib/proftpd/mod_facl.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,752 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_unique_id.c' 2021-04-14 08:13:45,757 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_unique_id' (from '/usr/lib/proftpd/mod_unique_id.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,762 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_copy.c' 2021-04-14 08:13:45,768 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_copy' (from '/usr/lib/proftpd/mod_copy.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,773 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_deflate.c' 2021-04-14 08:13:45,787 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_deflate' (from '/usr/lib/proftpd/mod_deflate.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,789 DietPi proftpd[951]: mod_deflate/0.5.7: using zlib 1.2.11 2021-04-14 08:13:45,792 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ifversion.c' 2021-04-14 08:13:45,798 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ifversion' (from '/usr/lib/proftpd/mod_ifversion.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,800 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_memcache.c' 2021-04-14 08:13:45,805 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_memcache' (from '/usr/lib/proftpd/mod_memcache.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,809 DietPi proftpd[951]: mod_memcache/0.1: using libmemcached-1.0.18 2021-04-14 08:13:45,812 DietPi proftpd[951]: 
mod_dso/0.5: loading 'mod_tls_memcache.c' 2021-04-14 08:13:45,815 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_tls_memcache' (from '/usr/lib/proftpd/mod_tls_memcache.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,815 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_readme.c' 2021-04-14 08:13:45,823 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_readme' (from '/usr/lib/proftpd/mod_readme.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,825 DietPi proftpd[951]: mod_dso/0.5: loading 'mod_ifsession.c' 2021-04-14 08:13:45,831 DietPi proftpd[951]: mod_dso/0.5: loaded module 'mod_ifsession' (from '/usr/lib/proftpd/mod_ifsession.so', last modified on Tue Mar 10 23:03:08 2020) 2021-04-14 08:13:45,835 DietPi proftpd[951]: RELINQUISH PRIVS at parser.c:1190 2021-04-14 08:13:45,838 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:388 2021-04-14 08:13:45,844 DietPi proftpd[951]: DenyFilter: compiling regex '\*.*/' 2021-04-14 08:13:45,857 DietPi proftpd[951]: retrieved UID 1000 for user 'dietpi' 2021-04-14 08:13:45,862 DietPi proftpd[951]: retrieved GID 1000 for group 'dietpi' 2021-04-14 08:13:45,866 DietPi proftpd[951]: <IfModule>: using 'mod_quotatab.c' section at line 53 2021-04-14 08:13:45,868 DietPi proftpd[951]: <IfModule>: using 'mod_ratio.c' section at line 57 2021-04-14 08:13:45,871 DietPi proftpd[951]: <IfModule>: using 'mod_delay.c' section at line 61 2021-04-14 08:13:45,873 DietPi proftpd[951]: <IfModule>: using 'mod_ctrls.c' section at line 65 2021-04-14 08:13:45,874 DietPi proftpd[951]: ROOT PRIVS at mod_ctrls.c:114 2021-04-14 08:13:45,877 DietPi proftpd[951]: RELINQUISH PRIVS at mod_ctrls.c:117 2021-04-14 08:13:45,878 DietPi proftpd[951]: <IfModule>: using 'mod_ctrls_admin.c' section at line 73 2021-04-14 08:13:45,879 DietPi proftpd[951]: ROOT PRIVS at mod_core.c:376 2021-04-14 08:13:45,879 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:378 2021-04-14 08:13:45,879 DietPi proftpd[951]: ROOT PRIVS at 
mod_core.c:385 2021-04-14 08:13:45,879 DietPi proftpd[951]: processing configuration directory '/etc/proftpd/conf.d/' 2021-04-14 08:13:45,880 DietPi proftpd[951]: RELINQUISH PRIVS at mod_core.c:388 2021-04-14 08:13:45,907 DietPi proftpd[951]: UseReverseDNS off, returning IP address instead of DNS name 2021-04-14 08:13:45,907 DietPi proftpd[951] 127.0.0.1: 2021-04-14 08:13:45,907 DietPi proftpd[951] 127.0.0.1: Config for DietPi FTP: 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: IdentLookups 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: DeferWelcome 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: MultilineRFC2228 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: DefaultServer 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: ShowSymlinks 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: AllowRetrieveRestart 2021-04-14 08:13:45,908 DietPi proftpd[951] 127.0.0.1: AllowStoreRestart 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: TimeoutNoTransfer 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: TimeoutStalled 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: TimeoutIdle 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DisplayLogin 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DisplayChdir 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: ListOptions 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DenyFilter 2021-04-14 08:13:45,909 DietPi proftpd[951] 127.0.0.1: DefaultRoot 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: RootLogin 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: UserID 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: UserName 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: GroupID 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: GroupName 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: Umask 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: DirUmask 2021-04-14 08:13:45,910 DietPi proftpd[951] 127.0.0.1: AllowOverwrite 
2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: TransferLog 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: SystemLog 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: WtmpLog 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: QuotaEngine 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: Ratios 2021-04-14 08:13:45,911 DietPi proftpd[951] 127.0.0.1: DelayEngine 2021-04-14 08:13:45,912 DietPi proftpd[951] 127.0.0.1: mod_facl/0.6: registered 'facl' FS 2021-04-14 08:13:45,921 DietPi proftpd[951] 127.0.0.1: mod_tls/2.7: generating initial TLS session ticket key 2021-04-14 08:13:45,924 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_tls.c:4815 2021-04-14 08:13:45,927 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_tls.c:4818 2021-04-14 08:13:45,930 DietPi proftpd[951] 127.0.0.1: mod_tls/2.7: scheduling new TLS session ticket key every 3600 secs 2021-04-14 08:13:45,935 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: binding to text domain 'proftpd' using locale path '/usr/share/locale' 2021-04-14 08:13:45,936 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: using locale files in '/usr/share/locale' 2021-04-14 08:13:45,939 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'ko_KR': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,943 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'bg_BG': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,945 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'ja_JP': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,948 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'en_US': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,951 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'fr_FR': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,954 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 
'es_ES': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,958 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'zh_TW': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,960 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'zh_CN': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,964 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'it_IT': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,968 DietPi proftpd[951] 127.0.0.1: mod_lang/1.1: skipping possible language 'ru_RU': not supported by setlocale(3); see `locale -a' 2021-04-14 08:13:45,971 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_log.c:2151 2021-04-14 08:13:45,974 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_log.c:2154 2021-04-14 08:13:45,976 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_rlimit.c:555 2021-04-14 08:13:45,978 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_rlimit.c:558 2021-04-14 08:13:45,980 DietPi proftpd[951] 127.0.0.1: set core resource limits for daemon 2021-04-14 08:13:45,981 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at mod_auth_unix.c:1338 2021-04-14 08:13:45,986 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_auth_unix.c:1341 2021-04-14 08:13:45,989 DietPi proftpd[951] 127.0.0.1: retrieved group ID: 1000 2021-04-14 08:13:45,991 DietPi proftpd[951] 127.0.0.1: setting group ID: 1000 2021-04-14 08:13:45,993 DietPi proftpd[951] 127.0.0.1: SETUP PRIVS at main.c:2594 2021-04-14 08:13:45,994 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at main.c:1862 2021-04-14 08:13:45,995 DietPi proftpd[951] 127.0.0.1: deleting existing scoreboard '/run/proftpd.scoreboard' 2021-04-14 08:13:45,996 DietPi proftpd[951] 127.0.0.1: opening scoreboard '/run/proftpd.scoreboard' 2021-04-14 08:13:45,998 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at main.c:1889 2021-04-14 08:13:46,002 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at 
mod_ctrls_admin.c:1632 2021-04-14 08:13:46,002 DietPi proftpd[951] 127.0.0.1: opening scoreboard '/run/proftpd.scoreboard' 2021-04-14 08:13:46,005 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at mod_ctrls_admin.c:1634 2021-04-14 08:13:46,007 DietPi proftpd[951] 127.0.0.1: ROOT PRIVS at inet.c:409 2021-04-14 08:13:46,008 DietPi proftpd[951] 127.0.0.1: RELINQUISH PRIVS at inet.c:459 2021-04-14 08:13:46,009 DietPi proftpd[951] 127.0.0.1: Failed binding to ::, port 21: Address already in use 2021-04-14 08:13:46,011 DietPi proftpd[951] 127.0.0.1: Check the ServerType directive to ensure you are configured correctly 2021-04-14 08:13:46,011 DietPi proftpd[951] 127.0.0.1: Check to see if inetd/xinetd, or another proftpd instance, is already using ::, port 21 2021-04-14 08:13:46,011 DietPi proftpd[951] 127.0.0.1: Unable to start proftpd; check logs for more details Debug via strace proftpd | grep -E "SOCKET|sock" getpeername(0, 0xbe8a6c1c, [16]) = -1 ENOTSOCK (Socket operation on non-socket) socket(AF_UNIX, SOCK_DGRAM, 0) = 3 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 4 getsockopt(4, SOL_SOCKET, SO_RCVBUF, [131072], [4]) = 0 getsockopt(4, SOL_SOCKET, SO_SNDBUF, [16384], [4]) = 0 socket(AF_UNIX, SOCK_STREAM, 0) = 4 bind(4, {sa_family=AF_UNIX, sun_path="/run/test.sock"}, 110) = 0 unlink("/run/test.sock") = 0 socket(AF_UNIX, SOCK_STREAM, 0) = 4 bind(4, {sa_family=AF_UNIX, sun_path="/run/proftpd.sock"}, 110) = -1 EADDRINUSE (Address already in use) write(2, "2021-04-14 11:08:40,739 DietPiHo"..., 1292021-04-14 11:08:40,739 DietPi proftpd[2682]: mod_ctrls/0.9.5: error: unable to bind to local socket: Address already in use 
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 6 connect(6, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 6 connect(6, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5 connect(5, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5 connect(5, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4 connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4 connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) socket(AF_NETLINK, SOCK_RAW|SOCK_CLOEXEC, NETLINK_ROUTE) = 4 getsockname(4, {sa_family=AF_NETLINK, nl_pid=2682, nl_groups=00000000}, [12]) = 0 socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC, IPPROTO_IP) = 4 getsockname(4, {sa_family=AF_INET, sin_port=htons(44402), sin_addr=inet_addr("127.0.0.1")}, [28->16]) = 0 getsockname(4, {sa_family=AF_INET, sin_port=htons(40796), sin_addr=inet_addr("127.0.0.1")}, [28->16]) = 0 socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 4
The strace output indicates that the error is caused by the attempt to create /run/proftpd.sock, which apparently already exists. Try fuser /run/proftpd.sock to see if any process is holding onto it; it will report the PID numbers of any such processes. Then use ps -fp <PID number here> to get more information about the process(es) in question. If it's systemd, you might need to do something like systemctl stop proftpd.socket; systemctl disable proftpd.socket to get rid of it. (In this case, DietPi's default ProFTPD configuration might have been tailored to use systemd's socket activation mechanism - essentially a mechanism that can replace the classic inetd/xinetd in running the FTP daemon on-demand only. As you seem to want to run ProFTPD as a classic stand-alone service, you would need to disable systemd's socket for it.) If it's some other process, you might want to kill it and figure out how to prevent it from getting started again. But if fuser lists no processes at all, it might be that the /run/proftpd.sock is simply a left-over from an earlier test run that did not start correctly; in that case, run rm /run/proftpd.sock and try systemctl start proftpd.service again.
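The last case — a stale socket file with no owner — can be handled with a small guard like the one below. This is only a sketch: it uses a file under /tmp as a stand-in for the real /run/proftpd.sock, and assumes fuser (from the psmisc package) is available.

```shell
# Sketch of the recovery logic: remove the socket path only when nothing holds it.
# fuser exits non-zero when no process is using the path.
sock=/tmp/demo_proftpd.sock
touch "$sock"                         # simulate the stale left-over file
if ! fuser "$sock" >/dev/null 2>&1; then
  rm -f "$sock" && echo "removed stale $sock"
fi
```

Only run the real equivalent (fuser /run/proftpd.sock || rm /run/proftpd.sock) once you have confirmed no PID appears in the fuser output.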
proFTPD not working due to socket bind error
1,317,280,996,000
When I upload my backup using FTP to my backup server, my upload speed gets to 1.5Gb/s, which is good! BUT I see strange things in the download traffic!! When my upload speed gets bigger than 1Gb/s I see download traffic of up to 10Mb/s. I don't know what's really going on here!? There is no app which could cause this download traffic, and it starts as soon as the FTP transfer starts! I use nload to see the traffic. Any explanation?
The chances are that the unexpected inbound traffic is simply acknowledgements of the data you're uploading. Assuming it's TCP/IP rather than UDP, of course. I would have thought 100:1 a little strong, but it is quite plausible. Best thing might be to run something like ntop to see whether the incoming traffic addresses match the outbound traffic.
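A back-of-envelope calculation shows the ratio is plausible. Assuming ~1500-byte segments and one ~66-byte ACK per two segments (delayed ACKs) — both just illustrative numbers, real values vary — the worst-case ACK bandwidth for a 1.5 Gb/s upload comes out around 33 Mb/s:

```shell
# Worst-case ACK bandwidth for a 1.5 Gb/s upload (illustrative assumptions):
# ~1500-byte segments, one ~66-byte ACK for every two segments.
upload_bps=1500000000
seg_bytes=1500
ack_bytes=66
acks_per_sec=$(( upload_bps / 8 / seg_bytes / 2 ))
ack_bps=$(( acks_per_sec * ack_bytes * 8 ))
echo "$ack_bps bits/s of pure ACK traffic"   # prints 33000000 bits/s
```

The observed 10 Mb/s sits comfortably below that bound, consistent with plain TCP acknowledgements.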
Got Download Traffic While Uploading With High Speed
1,317,280,996,000
How can I inspect the source code of ftpd (the regular Linux FTP server) on my machine? I'm using Debian 9. Unfortunately I can't figure out which implementation the ftpd package, which I previously installed through apt install ftpd, actually is.
Since you’re using Debian, if your repositories are set up correctly (and they are by default), apt-get source ftpd will download and extract the source code for your ftpd package in a sub-directory of the current directory — on Debian 9, that will be ftpd-0.17-36. This works for any package downloaded from the Debian repositories, as long as the corresponding deb-src entries are present in your repository configuration.
Where can I find the source code of ftpd?
1,317,280,996,000
I've gone through all the possible threads and answers and still I'm not sure how to achieve this. Is the following scenario even possible in the first place? I have an Ubuntu 16.04 server which has OpenVPN installed and running. I can connect to this VPN through a Windows client. The server also hosts some files that I need access to, and I would like to use FTP for the time being (SFTP is better, I know). Can I make it so that the FTP port is only accessible through my VPN connection? Because I'm not sure, and this same question has been asked hundreds of times, could someone point me in the right direction?
You have several options. Configure your FTP server to listen only on your OpenVPN interface/address. Use iptables to filter out traffic from anything other than the OpenVPN interface. If your default policy is DROP and your OpenVPN interface name is tun0, to accept all traffic from the OpenVPN interface: iptables -A INPUT -i tun0 -j ACCEPT or alternatively accept matching by your OpenVPN network, for example in case your OpenVPN configuration uses addresses in the 10.8.0.0/24 network: iptables -A INPUT -s 10.8.0.0/24 -j ACCEPT If you want more specific rules to limit the filtering to FTP only, you need to match the FTP ports, and for passive FTP also match the RELATED connection state. A quite good explanation is provided by this answer on Server Fault. Configuring a matching rule by port number is simpler for SSH (and SFTP) since it doesn't use any other incoming ports. When your default policy for the OUTPUT chain is DROP you also need similar rules to allow outgoing traffic (-A OUTPUT -o tun0 -j ACCEPT and -A OUTPUT -d 10.8.0.0/24 -j ACCEPT respectively)
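As a concrete sketch (the interface name and port are the same assumptions as above: tun0 and plain FTP on port 21), it can be safer to generate the FTP-specific rules as text first and review them before running them as root:

```shell
# Emit the rules as text so they can be reviewed before applying.
# Assumptions: OpenVPN interface is tun0, FTP control port is 21.
vpn_if=tun0
ftp_rules() {
  printf 'iptables -A INPUT -i %s -p tcp --dport 21 -j ACCEPT\n' "$vpn_if"
  printf 'iptables -A INPUT -i %s -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT\n' "$vpn_if"
  printf 'iptables -A INPUT -p tcp --dport 21 -j DROP\n'
}
ftp_rules
```

After review, the output can be executed with ftp_rules | sudo sh. The conntrack RELATED,ESTABLISHED match is what lets passive-mode data connections follow an already-accepted control connection.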
Allow FTP only through VPN
1,317,280,996,000
I have a couple of FTP users who can access my (pseudo) server running vsftpd under Mint Linux. They are restricted to a specific directory and below via chroot(). Whilst they can access the files without problem, they are unable to delete them when they are no longer required. The permissions on their base directory, and all sub-directories, are 'drwxrwxrwx' and all files '-rw-rw-rw-'. I have searched the web but been unable to find an explanation.
Are the users local users or vsftpd virtual users? Try the following: grep local_enable /etc/vsftpd/vsftpd.conf That should be set to YES and the users should log in with local accounts for this to work. If you do not want this or already have this set, please paste your /etc/vsftpd/vsftpd.conf here.
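For deletion specifically, both local_enable and write_enable need to be YES. A quick way to check is shown below against a temporary copy, since the config path varies by distro (/etc/vsftpd/vsftpd.conf on CentOS, /etc/vsftpd.conf on Debian):

```shell
# Check the two settings that file deletion depends on.
# /tmp/demo_vsftpd.conf stands in for the real config file here.
conf=/tmp/demo_vsftpd.conf
printf '%s\n' 'local_enable=YES' 'write_enable=YES' > "$conf"
grep -E '^(local_enable|write_enable)=' "$conf"
```

Run the same grep against your real vsftpd.conf; if either line is missing or set to NO, that would explain files being readable but not deletable.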
Unable to delete files via FTP?
1,317,280,996,000
I'm using a Raspberry Pi 3 and vsftpd to share a directory via FTP, which allows a camera to connect to it and transfer photos. I also created a simple user & password and chrooted the directory so there's no file browsing outside of the dedicated folder when using FileZilla or any other tool. The problem is, if I log in via the terminal (monitor, keyboard etc., no SSH) using that user, I'm free to go wherever I want. Is there a way to prevent this? I already tried: usermod --expiredate 1 passwd -l usermod -s /sbin/nologin But this makes the account unusable. If I search for jail/chroot terminal user, there are only 'ssh' results. Any help will be greatly appreciated. EDIT By account unusable, I mean it disables logging in via the terminal (which is what I want), but it also prevents connecting via FTP. EDIT 2 The point is to disable everything for this user except the FTP folder; I don't need, and don't want, that user to do anything else. Only FTP, no other protocol; I need to target all/most Wi-Fi cameras, and FTP is the way to go. SFTP and SSH are disabled.
If you want to disable SSH (including scp & sftp) logins for a particular user, you could simply add DenyUsers <name of the ftp-only user> to /etc/ssh/sshd_config and restart the sshd daemon. It will leave the possibility of using the FTP-only account to log in locally, but if the system is in a secure location, that might actually be useful for troubleshooting FTP transfer failures. The traditional way to create FTP-only user accounts would be to set the user's shell to /sbin/nologin, /bin/false, or any similar program that does not allow input and exits immediately, but also list that program in /etc/shells. The classic check done by the FTP daemon after the password check is "does this user have a shell that is listed in /etc/shells?". On modern systems, this may be implemented via the PAM configuration instead of by the FTP daemon itself. By configuring the user with a "do-nothing" shell that is valid (= is listed in /etc/shells), the account will be valid for FTP use, but any use that requires running a shell will fail because the "do-nothing" program is run instead of any real shell, and it will just exit without accepting any input from the user. Note: if the system has any other network services that are using the system password database, and are not based on running a shell (e.g. an IMAP or POP3 mail server), then you may have to configure those services to also explicitly reject the "FTP-only" user.
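The /etc/shells check described above can be illustrated with a small sketch (the is_valid_shell helper and the temporary shells file are mine, purely for illustration; real daemons consult /etc/shells itself):

```shell
#!/bin/sh
# Sketch of the classic "is this user's shell listed in /etc/shells?" check
# that FTP daemons perform after the password check. The helper name and
# the temporary shells file below are illustrative only.
is_valid_shell() {
    # -q: quiet, -x: match the whole line exactly
    grep -qx "$1" "$2"
}

shells_file=$(mktemp)
printf '/bin/sh\n/bin/bash\n/bin/false\n' > "$shells_file"

# Once a "do-nothing" shell such as /bin/false is listed, the account
# passes the check for FTP but remains useless for interactive login.
is_valid_shell /bin/false "$shells_file" && echo "FTP login allowed"
is_valid_shell /sbin/nologin "$shells_file" || echo "FTP login refused"
```

On a real system you would point the check at /etc/shells and set the user's shell with something like usermod -s /bin/false ftpuser.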
Disable terminal access but keep FTP
1,317,280,996,000
Currently, I am trying to run ftp commands from telnet client. I was successful with USER, PASS, PASV, LIST and when tried PORT vsftp server is throwing 500 Illegal PORT command. I am following the syntax as specified in RFC 959 DATA PORT (PORT) The argument is a HOST-PORT specification for the data port to be used in data connection. There are defaults for both the user and server data ports, and under normal circumstances this command and its reply are not needed. If this command is used, the argument is the concatenation of a 32-bit internet host address and a 16-bit TCP port address. This address information is broken into 8-bit fields and the value of each field is transmitted as a decimal number (in character string representation). The fields are separated by commas. A port command would be: PORT h1,h2,h3,h4,p1,p2 where h1 is the high order 8 bits of the internet host address. I tried to check if it is a problem related to vsftpd configuration file using ftp command. It was working fine with passive mode turn off. So why is it throwing error when I run from telnet client ? Below I attach the screenshot of telnet session, and nc session where I am trying to listen for data connection.
In olden days when the world was young, pterodactyls still flew in the sky, a computer that was able to connect to an IP network cost $100,000 (and those were real dollars) and most of the system administrators of those machines knew each other on a first-name basis, it was decided to partition the 65535 TCP ports (or is it 65536?) into two groups and make the Unix kernel restrict access to the ports below 1024 to "root". Well-known ports were allocated in this region. If you saw an incoming connection which came from such a port you could trust it, because you knew that Dick, or John, or Jane would not allow their machine to be hacked. Likewise only trusted programs on your machine could open such ports. Now of course you can buy a computer for about the cost of 3 pints of beer (e.g. a Raspberry Pi) or less, so this is no longer true. So the first question from the OP was why he got back an "Illegal PORT" message, and that is because the daemon is objecting to port number 100, a reserved port. The follow-up question in the comments is why it is objecting. The answer here is a bit more nebulous. If you ask the system to send the contents of a fifo then you can send arbitrary sequences of bytes with almost any timing. As @A.B indicates, this allows you to use an FTPD as part of an attack.
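The PORT argument encoding from RFC 959 quoted in the question is easy to verify with a few lines of shell (the make_port_arg helper is mine, just to illustrate the arithmetic):

```shell
#!/bin/sh
# Build an RFC 959 PORT argument from a dotted-quad address and TCP port:
# p1 is the high byte of the port, p2 the low byte, and the address dots
# become commas. The helper name is illustrative, not a real tool.
make_port_arg() {
    ip=$1
    port=$2
    p1=$((port / 256))
    p2=$((port % 256))
    printf 'PORT %s,%d,%d\n' "$(printf '%s' "$ip" | tr '.' ',')" "$p1" "$p2"
}

# 50000 = 195*256 + 80, so the two port bytes are 195 and 80.
make_port_arg 192.168.0.133 50000   # -> PORT 192,168,0,133,195,80
```

A client asking for a port below 1024 (p1=0 and a small p2, e.g. port 100 as in the question) is exactly what a server may refuse with 500 Illegal PORT command.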
vsftp server returning 500 Illegal PORT command when tried to run raw ftp commands in telnet
1,317,280,996,000
I have a case when I need to move data from an old server: host1 to a new server: host2. The problem is host1 cannot see host2, but I can use another server (localhost) to SSH to both host1 and host2. Imagine it should work like this: host1 -> localhost -> host2 How can I use rsync to copy files between host1 and host2? I tried this command on localhost server but it says The source and destination cannot both be remote. rsync -avz host1:/workspace host2:/rasv1/old_code-de
I ended up with the solution from https://unix.stackexchange.com/users/312074/eblock: scp with its -3 option, which relays the copy through the local host: scp -3 -r host1:/workspace host2:/rasv1/old_code-de
How to use rsync to copy files between 2 remote servers based on the localhost server? [duplicate]
1,317,280,996,000
How could I modify the script below to include the source directory so that it will know which location to retrieve the files required for transfer? I have a bash script for the purpose of automatically transferring files to a remote server, which occurs once a week. Here is the script: #!/bin/bash HOST=sftp.mydomain.com USER=user PASS=pass ftp -inv $HOST << EOF $USER $PASS cd /d D:\destination\directory put file1.gz put file2.gz put file3.gz bye EOF Files 1-3 are all in source/directory, so I'd like to transfer all files from source/directory to the destination path specified above. However, I believed that because the script is running on the source VM, that I wouldn't need to specify a directory since the script could simply pull it from any folder (rookie mistake, I understand). Alternatively: is there an easier way to use ftp for an entire directory as opposed to simply listing the entire content of the folder?
To change which directory the ftp process sees as the source, either cd there beforehand or lcd there within: 1. cd /source/directory ftp ... 2. ftp ... lcd /source/directory ... lcd (from man ftp) is short for local change directory; it will: Change the working directory on the local machine. "Local" here means the system that you ran ftp from. To put all files from the current local directory to the current remote directory in ftp, use: prompt mput * The prompt FTP command disables interactive prompting during "multiple" operations, such as mget or mput. Quoting from the FTP man page (or see your local man ftp): Interactive prompting occurs during multiple file transfers to allow the user to selectively retrieve or store files.
Automated FTP Script Not Finding Source Directory
1,317,280,996,000
It used to be that you could force command line FTP to use IPv4 like so: ftp -4 ftp.example.com However, at some point in the relatively recent past the "-4" (and for that matter, the "-6") option seems to have been removed. Despite exhaustively searching the Web (even for the exact error "ftp: 4: unknown option") I can't find out how to, as the old man page reads, "Use only IPv4 to contact any host" and force use of IPv4. Instead I'm forced to wait for the client to time out on the IPv6 in the DNS before trying IPv4, which is waste of time. Is there any other way to accomplish this? And before I get lectured on the insecurity of FTP, I'm aware of that and my options. However, I'm connecting to a very old server with non-critical log-in credentials to retrieve non-sensitive data. My ftp on Xubuntu 14.04 LTS supports the -4 option, but ftp on CentOS 7.7 doesn't.
-4 and -6 are options added by a patch in the Debian version of netkit-ftp; you’ll find these available in any Debian derivative. Fedora, RHEL and CentOS don’t have an equivalent patch, so their ftp doesn’t support these options. To force IPv4, you could try specifying the target IP address rather than the host name.
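Without a -4 flag, one workaround sketch (hedged: getent's ahostsv4 database is glibc-specific, and localhost stands in for the real FTP host so the lookup works offline) is to resolve only the A record yourself and hand the literal address to ftp:

```shell
#!/bin/sh
# Resolve only IPv4 (A) records with getent's "ahostsv4" database, then
# pass the literal address to ftp so no AAAA lookup or IPv6 connection
# attempt is made. "localhost" is used here so the lookup works offline;
# substitute your real FTP host.
host=localhost
addr=$(getent ahostsv4 "$host" | awk 'NR==1 {print $1}')
echo "would run: ftp $addr"
```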
What happened to the "-4" option for command line FTP?
1,317,280,996,000
Sorry, but this is probably a terribly stupid question. I'm running a Linux2 instance on AWS. I have a number of sites running there, including a couple of WordPress sites. One of the sites wants an FTP account so they can edit their WordPress files directly. I want to create an FTP account and lock it to the site directory. I've read a number of interesting answers here and on other sites (e.g. How to create a FTP user with specific /dir/ access only on a Centos / linux installation) and it looks like every answer says after adding a new user you: chown -R <username> /var/www/mydomain.com My question is: is this changing the ownership of this directory? Is this something I need to do? Will this affect the permissions and access of other users? Sorry for the probably silly question; I'm still fairly new to Unix and don't want to mess up a live website.
Have a look at groups: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux . Groups basically allow you to set the rights of several users over files and directories. More info: https://www.howtoforge.com/tutorial/linux-groups-command/ https://wiki.archlinux.org/index.php/users_and_groups
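A minimal sketch of the group-based approach on a scratch directory (the directory name is illustrative; on the real site you would additionally need groupadd/usermod/chgrp, which require root):

```shell
#!/bin/sh
# Demonstrate group-writable, setgid permissions on a throwaway directory.
# Mode 2775 = rwxrwsr-x: group members may write, and the setgid bit
# makes files created inside inherit the directory's group -- the usual
# way to let an FTP user edit a web tree without changing its owner.
site="$(mktemp -d)/mydomain.com"
mkdir -p "$site"
chmod 2775 "$site"

stat -c '%a' "$site"   # -> 2775
```

On the live tree the equivalent would be chgrp -R on a shared group over /var/www/mydomain.com, chmod -R g+rwX, and setting the setgid bit on the directories.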
adding FTP user without changing owner of directory?
1,317,280,996,000
This is the bash code: ftp -n <ftpadress> <<EOT <credentials> binary put $pathfile$reportfile $remotepath$reportfile put $pathfile$logfile $remotepathlog$logfile quit EOT This is the output: a <files_to_add> put <files_to_add_with_path> <files_to_add_with_remote_path> The parameter is incorrect. I checked the arguments of the put command and they are correct. Finally I check in the FTP and is not there I have two questions: Why is not transferred? What does the "The parameter is incorrect" mean? IMPORTANT NOTE: The file to be uploaded to the FTP contained the colon in the file name
As @Kusalanada suggested, Windows did not like the symbol ":" I changed the name of the file, from file_2019-06-11_14:54:37.tar to file_02019-06-11_145437.tar
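The rename can also be automated before upload; a hedged sketch (the sanitize helper and the character set are mine, covering characters Windows rejects in filenames, including the ':' from the question):

```shell
#!/bin/sh
# Strip characters that Windows filesystems refuse in filenames before
# sending a file via FTP. Helper name and character list are illustrative.
sanitize() {
    printf '%s\n' "$1" | tr -d ':<>"|?*'
}

sanitize 'file_2019-06-11_14:54:37.tar'   # -> file_2019-06-11_145437.tar
```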
File is not sent via ftp - error: "The parameter is incorrect"
1,523,959,599,000
This script will send a file via FTP and then delete it. But, sometimes, the file is deleted before the transmission ends, and then an empty file is received. #!/bin/bash tar czf <sourcefile> --directory=<directory> log ftp -v -n $1 <<END_OF_SESSION user <user> <password> put <sourcefile> <targetfile> bye END_OF_SESSION rm <sourcefile> What would be a good way to synchronize the processes, such that the deletion occurs after the sending has completed? As shown in the update below, the connection sometimes cannot be established. Notes: Running on Lubuntu 16.04. Updated with the tar line. log information for a failed session: Connected to IP 220 (vsFTPd 3.0.2) 331 Please specify the password. 230 Login successful. Remote system type is UNIX. Using binary mode to transfer files. 200 Switching to Binary mode. local: /home/user01/tmp/log.tgz remote: E1/180418090056 200 PORT command successful. Consider using PASV. 425 Failed to establish connection. 221 Goodbye. and a successful one: Connected to IP 220 (vsFTPd 3.0.2) 331 Please specify the password. 230 Login successful. Remote system type is UNIX. Using binary mode to transfer files. 200 Switching to Binary mode. local: /home/user01/tmp/log.tgz remote: E1/180418090344 200 PORT command successful. Consider using PASV. 150 Ok to send data. 226 Transfer complete. 6901 bytes sent in 0.00 secs (43.5848 MB/s) 221 Goodbye.
The ftp command has no functionality to allow you to check for successful transfer. If you must continue using this implementation of FTP transfer, two alternatives are: Download the transmitted file to a local temporary and compare it byte for byte against the source. Run ls within the FTP client and check that the file length matches expectations. Bear in mind that ls is server dependent, and can vary from server implementation to implementation. The best solution (other than replacing FTP entirely with rsync or scp) is to use a different FTP client that provides a reliable transfer status. #!/bin/bash tar czf <sourcefile> --directory=<directory> log lftp -u '<user>,<password>' -e 'put -E <source> -o <target>; quit' "$1" The lftp command should be available in most Linux distributions. The -E flag configures the put command to act more like mv rather than cp: it deletes the source file after a successful transfer.
Check if a file has been properly transmitted via FTP
1,523,959,599,000
I have a script that generates a log, and at the end of the script I move the log to a Windows server. The connection between the 2 servers is fine; if I try to send the files manually it works well. The script and logs are in 2 different locations. My script is like below: LOGFILE=/home/logs/monitor_sync_FM2.log HOST='xxx.xxx.xxx.xxx' USER='FTPUser' PASSWD='Password' ftp -n $HOST << EOF user $USER $PASSWD binary prompt mput $LOGFILE quit EOF exit 0 but when I run the script I get the error: Filename invalid Can anyone please tell me if I'm missing something in my script? Thanks
You're trying to write the file to the path /home/logs/monitor_sync_FM2.log on the remote server (ie windows). 550 Filename invalid indicates that /home/logs does not exist on the remote server. What you want to do is this: LOGFILE=monitor_sync_FM2.log HOST='xxx.xxx.xxx.xxx' USER='FTPUser' PASSWD='Password' cd /home/logs/ ftp -n $HOST << EOF user $USER $PASSWD binary prompt mput $LOGFILE quit EOF exit 0
550 Filename invalid
1,523,959,599,000
I broke my Debian permissions by running chmod 770 /etc as root. I know it's almost impossible to fix this without reinstalling, but is it at least possible to back up the files I created? I still have root access at the moment with PuTTY. I tried to copy files with SFTP in FileZilla, which used to work fine before but broke after this command. Is there a way to fix the SFTP permissions so I can back up some files I can't lose? Or isn't this possible anymore? What's the best way to still back up my files? EDIT: SSH is working again. When connecting to SSH I get about 20 messages with: -bash: /dev/null: Permission denied. I'm still unable to connect with SFTP at the moment; after logging into SFTP the server disconnects instantly.
Change permissions on /etc back to 755. Then go to the /etc/ssh folder and change permissions according to those below: -rw-r--r--. 1 root root 242153 Mar 16 2016 moduli -rw-r--r--. 1 root root 2208 Mar 16 2016 ssh_config -rw-------. 1 root root 6702 Jun 28 16:36 sshd_config -rw------- 1 root ssh_keys 227 Jun 28 16:36 ssh_host_ecdsa_key -rw-r--r-- 1 root root 162 Jun 28 16:36 ssh_host_ecdsa_key.pub -rw------- 1 root ssh_keys 387 Jun 28 16:36 ssh_host_ed25519_key -rw-r--r-- 1 root root 82 Jun 28 16:36 ssh_host_ed25519_key.pub -rw------- 1 root ssh_keys 1679 Jun 28 16:36 ssh_host_rsa_key -rw-r--r-- 1 root root 382 Jun 28 16:36 ssh_host_rsa_key.pub Then restart the ssh service and check if it's working. EDIT: You can also try: cd /etc python -m SimpleHTTPServer 8080 This will allow you to access files in /etc via a browser on port 8080
Debian backup files after chmod 770 in /etc
1,523,959,599,000
I'm actually having error on this setup. I'm getting 500 Illegal PORT command. 425 Use PORT or PASV first when go using command PUT. I'm currently using CENTOS 7.2 Here's my vsftpd.conf: anonymous_enable=NO listen_port=58021 local_enable=YES write_enable=YES local_umask=022 dirmessage_enable=YES xferlog_enable=YES xferlog_std_format=YES listen=NO listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES dirlist_enable=yes pasv_enable=yes pasv_min_port=58022 pasv_max_port=58026 write_enable=yes local_root=/mnt/webcollab/super/ Already tried the fix on the internet and forums. http://www.linuxquestions.org/questions/linux-networking-3/vsftpd-425-error-57491/ and same what I saw here in exchange and still having issue. Thanks!
I think there is a problem with your FTP client. The PORT command is sent by the FTP client. If the client sends a PORT command with a 10.x.x.x address all the way to the server, that will never work because 10.x.x.x is a private range. There are only two ways that a client can send 10.x.x.x in a PORT command and expect it to work: The client is on the same private 10.x.x.x network as the server. The firewall on the client's network is protocol-aware of FTP, inspects the control channel in real time, and actually replaces the 10.x.x.x internal address with an external IP address, so that the server sees the external address rather than the client's internal 10.x.x.x address. Read the complete thread here on this issue. For a quick check, disable the firewall and SELinux on the FTP server temporarily to see whether it is related to that or not.
Setting up FTP Server (Passive) vsFTPd CENTOS7 issue
1,523,959,599,000
I am quite new to using Linux and I am wondering if, in most circumstances, there is a default SSH and/or FTP server running. I have started a few virtual machines and I have always been able to SSH into the computer, but I am not sure if this is generally the case (meaning: can I expect to connect to them, or do I have to install or start a program?)
It's hard to ask a question about linux distros in general, because some might... but, this is my experience with it, using primarily Ubuntu 14.04 server every day at work, setting up and configuring servers. There's not one that comes preinstalled (unless you select it at the last point of the install process - install software), but the most popular and used ssh server/client (at least afaik) is OpenSSH. As for FTP, no there isn't one that's preinstalled either, but the one at that same install software menu in the installation is vsftpd. If you tell me what linux distro you're using, I can do a little more digging to see if one comes preinstalled on that, but from my experience, not necessarily. :)
Is there a default ssh server or ftp server running on most linux distros?
1,523,959,599,000
I've installed vsftpd and FileZilla locally on my PC (Ubuntu 12.04). I can connect to the FTP server via FileZilla using 0.0.0.0 and it works. How can I access the FTP server from outside my computer? What address should I use? vsftpd.config listen=YES anonymous_enable=YES local_enable=YES write_enable=YES local_umask=022 dirmessage_enable=YES use_localtime=YES xferlog_enable=YES connect_from_port_20=YES secure_chroot_dir=/var/run/vsftpd/empty pam_service_name=vsftpd rsa_cert_file=/etc/ssl/private/vsftpd.pem
You can still bind it to 0.0.0.0, however you have to open the port with your firewall interface (depends, probably ufw or iptables on ubuntu). 0.0.0.0 binds to all interfaces (localhost as well as for example your ethernet interface). I recommend to search in the ubuntu documentation about firewalls. If your computer is behind a commonly configured home router and you want to make it available on the internet, you also have to set up a port forward on your router to your pc, this procedure varies from router to router. You also have to be aware of the security or legal issues of running a public ftp server, especially if you use anonymous_enable=YES.
vsftpd and filezilla
1,523,959,599,000
I installed git on my server and cloned my development repository using the root user. The permissions appeared to have been set to: Files: -rw-r--r-- Directories: drwxr-xr-x I previously had been using an FTP client to upload files. I need to still be able to use the FTP editor sometimes, but now when I try uploading a file with it I receive the error: Can't open that file: Permission denied I've been reading a lot of content on Linux permissions, but I'm just very confused what I should set the file and directory permissions to in order for my FTP client to work again. A lot of the information I've been reading seems conflicting.. For example, some websites say to just change the permissions to 777, but other say not to as that allows any user on the system total control of the files. Would the correct approach here be to change the "owner" of the files from what I assume is root to my FTP user?
Would the correct approach here be to change the "owner" of the files from what I assume is root to my FTP user? Yes. As root, run chown -R yourftpuser:yourftpgroup /path/to/tree. Your permissions will be fine for FTP at that point. All files will be owned by the FTP user, who has write access¹. In future, you can avoid this by cloning as the user you want to end up owning the files. Note though that all the files are readable by any user on the server with the permissions you posted. If that's a problem, you can remove the r permission from group and other: chmod -R go-r /path/to/tree. ¹ This assumes that your FTP server is set up to use system user accounts for login. They don't all behave that way, and it may be that you need to use its internal account system or change the file owner to the server's user.
What should file permissions be set to in order for an FTP editor to work?
1,523,959,599,000
I tried hard but was not able to give another user FTP access to a web directory. Below are the directories under /var/www/html: 1) nice_call 2) poor_call 3) great_call /var/www/html is owned by the apache user, as are all the directories mentioned above. I just want to give read+write access to one more user, named ftp_user, for the directory poor_call only, not for the other directories. I am even ready to give 777 access to the directory poor_call to resolve this issue. Please note that I have root access to execute any command needed. It would be great if someone could help with this.
Set up your user ftp_user so that they can FTP successfully into their home directory. Assuming you're using vsftpd as your FTP server, you'll need the following as a minimum in your /etc/vsftpd.conf: anonymous_enable=NO local_enable=YES write_enable=YES Within the user's home directory create a directory called (e.g.) poor_call. Then, bind mount /var/www/poor_call onto this newly created directory: # mount --bind /var/www/poor_call /home/ftp_user/poor_call After running the above command, /var/www/poor_call becomes accessible from /home/ftp_user/poor_call. Once you've confirmed that it works, add the following to your /etc/fstab to make the bind mount permanent across reboots: /var/www/poor_call /home/ftp_user/poor_call none bind 0 0
How to give apache web directory access to other FTP user where parent directory is restricted
1,523,959,599,000
Is it possible to hide PHP files, or a folder containing PHP files, from users accessing via FTP, while the files can still be executed on the web server?
You may hide those files using the HideFiles directive. Then you must treat those files as if they don't exist, using the IgnoreHidden directive.
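HideFiles and IgnoreHidden are ProFTPD directives; a hedged proftpd.conf sketch (the path and the regular expression below are illustrative):

```
<Directory /var/www/site>
  # Hide anything ending in .php from directory listings...
  HideFiles "\.php$"
  <Limit ALL>
    # ...and refuse FTP commands that name a hidden file.
    IgnoreHidden on
  </Limit>
</Directory>
```

The web server reads the files directly from disk, so PHP execution is unaffected; only FTP visibility changes.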
hiding a php file/folder with php file from ftp user
1,523,959,599,000
I need to connect to a remote FTP host via the Linux command line. When I try ftp <IP address>, it prompts for a username and then a password. But I would like to pass the username and password in a single line, so the remote host does not prompt for them. I tried ftp <ip address> <pw.cfg>, but with no success. How can I pass the username and password along with the IP address to the remote FTP host? Ideally I will be using this command from a Linux script. (pw.cfg is a text file containing the password.)
You can use a .netrc file in your home directory. In the file enter a record like: machine machine_name login username password your_password and when you enter: ftp machine_name ftp will use username to log in and your_password as the password
Could not pass the username password for FTP command from linux
1,523,959,599,000
I want to create an anonymous ftp. The user can upload and read only from pub directory. The OS is Unixware 71, the ftp server is WU-FTPD. This is the ftpaccess file #ident "@(#)unixsrc:usr/src/common/cmd/cmd-inet/usr.sbin/in.ftpd/ftpaccess /main/1" # # ftpd configuration file, see ftpaccess(4tcp). # loginfails 3 passwd-check trivial warn class all anonymous,guest,real * compress yes all tar yes all readme README* login all readme README* cwd=* all #banner /var/ftp/banner.msg #message /var/ftp/welcome.msg login all #message .message cwd=* all #chmod no anonymous delete yes anonymous,guest #overwrite no anonymous,guest #rename no anonymous,guest #umask no anonymous #limit all 5 Wk0900-1800 /var/ftp/toomany.msg #limit all 20 SaSu|Any1800-0900 /var/ftp/toomany.msg log commands anonymous,guest,real log transfers anonymous,guest,real inbound,outbound #path-filter anonymous,guest /var/ftp/filename.msg ^[[:alnum:]-._]*$ ^[.-] #upload /home/ftp * no nodirs #upload /home/ftp /pub/incoming yes ftp other 0444 nodirs upload /home/ftp /pub yes ftp other 0755 This are the directories with permissions and owners of home/ftp 1737 0 dr-xr-xr-x 5 root root 96 Mar 29 03:44 /home/ftp/ 41212 0 d--x--x--x 2 root root 96 Mar 29 03:19 /home/ftp/bin 41215 20 ---x--x--x 1 root sys 19904 Mar 29 03:19 /home/ftp/bin/ls 41214 0 d--x--x--x 2 root root 96 Mar 29 03:39 /home/ftp/etc 41216 2 -r--r--r-- 1 root sys 33 Mar 29 03:36 /home/ftp/etc/passwd 41217 2 -r--r--r-- 1 root sys 10 Mar 29 03:39 /home/ftp/etc/group 41219 0 drwxrwxrwt 2 root sys 96 Mar 29 03:44 /home/ftp/pub 41218 2 -rwxr-xr-x 1 ftp other 389 Mar 29 03:44 /home/ftp/pub/cast The server works, I can upload the file "cast" on server as anonymous. But as you can see, is not possible to read (is possible to get if you know the name of the file) ftp -n 192.168.0.133 Connected to 192.168.0.133 (192.168.0.133). 220 unixware1 FTP server (Version wu-2.6.2(2) Wed Dec 21 20:49:51 EST 2005) ready. 
ftp> user anonymous 331 Guest login ok, send your complete e-mail address as password. Password: 230 Guest login ok, access restrictions apply. ftp> cd pub 250 CWD command successful. ftp> ls 227 Entering Passive Mode (192,168,0,133,245,145) 150 Opening ASCII mode data connection for /bin/ls. 226 Transfer complete. ftp> put cast local: cast remote: cast 227 Entering Passive Mode (192,168,0,133,156,60) 150 Opening ASCII mode data connection for cast. 226 Transfer complete. 390 bytes sent in 2,5e-05 secs (15600,00 Kbytes/sec) ftp> ls 227 Entering Passive Mode (192,168,0,133,209,31) 150 Opening ASCII mode data connection for /bin/ls. 226 Transfer complete. ftp> ls cast 227 Entering Passive Mode (192,168,0,133,95,177) 150 Opening ASCII mode data connection for /bin/ls. 226 Transfer complete. ftp>
Solution found on another forum. I had to copy a shared library into the chrooted FTP tree: mkdir -p /home/ftp/usr/lib cp /usr/lib/libc.so.1 /home/ftp/usr/lib
What's the error on my anonymous ftp server? The upload works, the read not
1,523,959,599,000
The following two rules allow for passive transfers, which I added as firewall rules for my FTP server. //The following two rules allow the inbound FTP connection iptables -A INPUT -s $hostIP -p tcp --dport 21 -i eth0 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -d $hostIP -p tcp --sport 21 -o eth0 -m state --state ESTABLISHED -j ACCEPT // The following two rules allow for passive transfers iptables -A INPUT -s $hostIP -p tcp --dport 1024:65535 -i eth0 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -d $hostIP -p tcp --sport 1024:65535 -o eth0 -m state --state ESTABLISHED -j ACCEPT My FTP server was configured with the passive port range "1024:65535", and the above rules worked. But now the FTP server is configured to bind to any free port instead of a fixed port range. What changes are required in the above two rules? Edit: After applying the three rules for passive FTP connections mentioned in the answer, I have rules in the following order, and now it has stopped working: the client connects but is unable to retrieve the remote directory listing. //The following two rules allow the inbound FTP connection iptables -A INPUT -s $hostIP -p tcp --dport 21 -i eth0 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -d $hostIP -p tcp --sport 21 -o eth0 -m state --state ESTABLISHED -j ACCEPT iptables -A PREROUTING -t raw -p tcp -s $hostIP --dport 21 -j CT --helper ftp iptables -A INPUT -m conntrack --ctstate RELATED -m helper --helper ftp -s $hostIP -p tcp -j ACCEPT iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -m helper --helper ftp -d $hostIP -p tcp -j ACCEPT Working Rules iptables -A PREROUTING -t raw -p tcp -s $hostIP --dport 21 -j CT --helper ftp iptables -A INPUT -i eth0 -p tcp -s $hostIP -m conntrack --ctstate RELATED,ESTABLISHED -m helper --helper ftp -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp -d $hostIP -m conntrack --ctstate ESTABLISHED -m helper --helper ftp -j ACCEPT
I assume $hostIP in your rules means a host you wish to allow FTP access for, otherwise your existing rules won't make sense to me. If you are using unencrypted FTP, you should replace your wide-open FTP data connection rules with connection tracking. First, add a rule that attaches a FTP connection tracking helper to any incoming FTP command connections: iptables -A PREROUTING -t raw -p tcp -s $hostIP --dport $ftpCMDport -d $ftpServerIP -j CT --helper ftp Here, $ftpCMDport is the port in which your local FTP server accepts logins; usually it's port 21. (Historical note: this used to happen automatically for TCP port 21, but it turned out the automatic assignment could be abused, so the automatic assignment of connection tracking helpers was made optional in Linux kernel 3.5, and later the automatic assignment feature was completely removed.) Once the FTP command connections are being monitored by the CT helper, the firewall will "know" which ports should be allowed for legitimate FTP data connections. You'll need two more rules to actually use this information to allow incoming data connections: iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -m helper \ --helper ftp -s $hostIP -d $ftpServerIP -p tcp -j ACCEPT iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -m helper \ --helper ftp -s $ftpServerIP -d $hostIP -p tcp -j ACCEPT Together, these three rules should entirely replace your existing two rules for FTP data connections. These should be good for both active and passive connections, should your FTP server allow both types. If you only want to accept passive FTP data connections, you might want to remove the RELATED in the OUTPUT rule. 
This was based on: https://home.regit.org/netfilter-en/secure-use-of-helpers/ If you are using SSL/TLS encrypted FTP, then the connection tracking helper won't be able to make sense of the encrypted FTP command traffic, and so if the FTP server will accept data connections in any free port, you cannot effectively firewall traffic by TCP ports at all, since any TCP port could become a FTP data port for some connection. Your only possibility would then be to limit traffic by IP addresses: iptables -A INPUT -s $hostIP -p tcp -i eth0 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -d $hostIP -p tcp -o eth0 -m state --state ESTABLISHED -j ACCEPT Note that these are essentially your existing rules, with the --dport and --sport options removed.
How to make Firewall rules for Passive FTP which uses dynamic port?
1,523,959,599,000
I am setting up my home camera to record to my NAS server (CentOS 8 Stream). I am using FTP for the transfer. I installed vsftpd and created a dedicated user (ftpuser). I want the files to be transferred to /data/recordings, which is also a Samba share: \\mynas\recordings. When I set the destination for the FTP transfer from the camera's UI it always assumes I am starting from the user's home directory. So /data/recordings/camera1 becomes /home/ftpuser/data/recordings/camera1. There is no way to change this behavior. As a workaround I changed ftpuser's home directory to /data/recordings. This will only work if I set the label for the recordings folder to user_home_dir_t:

semanage fcontext -a -t user_home_dir_t /data/recordings
restorecon -F -R -v /data/recordings

The problem is that this breaks the Samba share, because the label was originally samba_share_t. If I turn off SELinux everything works perfectly, so the issue is 100% the labeling of the folder. I've done some googling and it looks like using a boolean might be the solution, but I can't figure out which one to use. I've tried a couple, but nothing seems to make a difference. Any assistance would be great. Thanks!
This should solve it without the need to change the security context of the files:

sudo setsebool -P samba_export_all_ro=1
sudo setsebool -P samba_export_all_rw=1

(probably the second command alone is enough, I've not tested).
SELinux issue with FTP/Samba/Home directory
1,624,460,440,000
I know the ftp client (the classical Unix ftp client, for example the kftp or netkit-ftp binary) uses the file $HOME/.netrc. The syntax is really simple: machine ftpservername login myname password ******* But if I want, for example, to turn the prompt on or off (the prompt off command), there is no $HOME/.ftprc? Is it possible to put such configuration in $HOME/.netrc?
Solution found. First, the only rc file of the ftp client is $HOME/.netrc; there is no ftprc, at least under netkit-ftp. For commands it is really simple: we can define and use macros.

cat $HOME/.netrc
machine ftp.myserver.priv login anonymous password anonymous
macdef macrodef1
prompt off

The line after the macro body must be empty, otherwise ftp returns an error. To run the macro at the ftp prompt:

ftp> $ macrodef1

To run the macro non-interactively:

echo "\$ macrodef1" | ftp -v ftp.myserver.priv

If we want a macro which runs automatically, we can define it in the configuration for a specific machine; a macro named "init" is recognized and run automatically by the ftp client.

machine yourmachine.com login youruser password yourpass
macdef init
passive on
prompt off
ftp client: no rc file?
1,624,460,440,000
I've created a user jdoe and I want to map that user to the apache user, so that every file I upload ends up owned by apache. This is my /etc/vsftpd/vsftpd.conf file:

listen=YES
listen_ipv6=no
anonymous_enable=NO
local_enable=YES
write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
ascii_upload_enable=YES
ascii_download_enable=YES
chroot_local_user=YES
chroot_list_enable=YES
ls_recurse_enable=YES
pam_service_name=vsftpd
rsa_cert_file=/etc/vsftpd/www.example.com/fullchain1.pem
rsa_private_key_file=/etc/vsftpd/www.example.com/privkey1.pem
pasv_enable=Yes
pasv_min_port=1030
pasv_max_port=1035
ssl_enable=yes
debug_ssl=yes
force_local_logins_ssl=YES
force_local_data_ssl=YES
allow_anon_ssl=no
ssl_ciphers=HIGH
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
allow_writeable_chroot=YES
guest_enable=YES
chmod_enable=YES
chown_uploads=YES
chown_username=apache
guest_username=apache
hide_ids=YES
user_config_dir=/etc/vsftpd

And I have in /etc/vsftpd/jdoe:

local_root=/var/www

But when I upload or create a file I get: 550 Permission denied (in FileZilla). Am I doing something wrong? Is what I'm looking for feasible?
Finally I've found a way:

listen=YES
listen_ipv6=no
anonymous_enable=NO
local_enable=YES
write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
log_ftp_protocol=YES
connect_from_port_20=YES
ascii_upload_enable=YES
ascii_download_enable=YES
chroot_local_user=YES
chroot_list_enable=YES
ls_recurse_enable=YES
pam_service_name=vsftpd
rsa_cert_file=/etc/vsftpd/www.example.com.ar/fullchain1.pem
rsa_private_key_file=/etc/vsftpd/www.example.com.ar/privkey1.pem
pasv_enable=Yes
pasv_min_port=1030
pasv_max_port=1035
ssl_enable=yes
debug_ssl=yes
force_local_logins_ssl=YES
force_local_data_ssl=YES
allow_anon_ssl=no
ssl_ciphers=HIGH
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
pasv_address=192.168.222.11
guest_enable=YES
chown_uploads=YES
chown_username=apache
guest_username=apache
hide_ids=YES
user_config_dir=/etc/vsftpd
anon_upload_enable=YES
anon_mkdir_write_enable=YES
anon_other_write_enable=YES
anon_umask=0002

And also, in /etc/vsftpd/jdoe:

local_root=/var/www/www_example_com_ar
vsftpd: 550 Permission denied trying to uploading/writing files
1,624,460,440,000
We will start to use SFTP instead of FTP for a Tibco BW 5 application on RHEL 6. Port 22 is already open. Can I keep the same user account and directory for SFTP? If so, should I change any directory/user permissions?
That's a bit too broad a question. It depends on what FTP and SFTP server software you are using and how those servers manage the user accounts. But I'll try to answer anyway, to give you some clue what to look for. If you give us more details, you might get better answers. You might use the same software for both FTP and SFTP. Then the FTP and SFTP will share the accounts. For example, ProFTPD supports both FTP and SFTP. You might use different software for FTP and SFTP. They can still share the accounts, typically in case they both use operating system accounts; for example, the OpenSSH SFTP server uses operating system accounts. But if your FTP and SFTP server software have their own account management, then you will have to set up new accounts for SFTP.
Migration from FTP to SFTP with the same user
1,624,460,440,000
I'm building a website, which I'm pushing online using Filezilla. But this is monotonous and I'm sure could be done with a script. So far I figured out this much: I connect with the ftp server using ftp mydomain.com I give my credentials and all is great. But this is how my project files look like: asset-manifest.json - file assets - folder favicon.ico - file index.html - file manifest.json - file og-image.png - file service-worker.js - file static - folder So there are folders. From what I saw I cannot push folders using the ftp command. I saw there is something like ncftp, but the syntax is a little weird. I don't see how I could traverse to the right folder. This is the path when I run pwd in the destination folder on the ftp folder: 257 "/websites/uczIchApp" is your current location So how I'd do it locally is more or less this: yarn build mv * -r /websites/myDestFolder How can I replicate it with ftp? I'm open to using other commands instead of ftp.
Ok, this is how I ended up doing it:

yarn build
ncftpput -R -v -u "User" -p "Password" domain /path/to/ build/*

I used this answer: https://superuser.com/a/841862
Pushing files and folders to a specific folder on ftp
1,624,460,440,000
There is a remote FTP server, which has my VPN Gateway IP address in white list. I have a name of FTP server, username and password. How can I connect to it? (I need my connection to go through VPN)
# route add <name-or-ip-of-ftp-server> gw <gateway-ip>
$ ftp <name-or-ip-of-ftp-server> [<port>]

then type your username and password
How to connect to an FTP server through a VPN
1,497,627,506,000
An embedded device, running the eCos OS, with an FTP Lite client. I want to download a file from the device to a local PC via FTP during a terminal session. The device initiates the FTP transfer: it asks for the FTP server's IP address, file name, user name and password. Once the details are entered, it starts the transfer. The device does not allow an outside FTP client to connect and initiate a download; it only allows transfers in one direction, from the device to a remote FTP host. How do I set up an FTP application on Ubuntu for this purpose?
How to setup FTP application on Ubuntu for this purpose? You need an FTP server that will allow uploads. I recommend vsftpd; here is a page that explains how to configure it for upload: https://www.centos.org/forums/viewtopic.php?t=39485
FTP download question
1,497,627,506,000
I wanted to share a tar file on my server so that anyone on my network can download it. For example, I would give them a link like 192.168.2.2/windows.tar They should be able to download it from their web browser with that link or wget 192.168.2.2/windows.tar does anyone have any suggestion on what package I can use to approach this. Tutorials would be even better!
There are two possible packages to use -- httpd, or nginx. Both are provided by CentOS 7. httpd is better known as Apache Web Server, and its documentation is available by googling "centos 7 httpd". nginx is a much smaller and lighter web server, and its documentation is available by googling "centos 7 nginx". Either one will do a perfectly good job handling the requirements you've specified. Hint: You will place your "windows.tar" file into a specified server directory. What you also want to do to prevent visitors from seeing any other content you might have in that directory is to "touch index.html" there. This will create a zero-length file which will prevent the nginx or httpd default splash screen from displaying, and won't give away the software you are using -- which hackers want to know if they decide to target your server. But do all of that after you've installed your preferred product. You'll open something like this link (in your favorite web browser): http://192.168.2.2/ What that should do is show you the default splash screen. At that point, once you have the splash screen visible, you can add your content to replace it.
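If this is a one-off transfer and installing a full web server feels heavy, here is a minimal alternative sketch (the serve_dir helper name is mine; it assumes python3 is installed) that shares the current directory over HTTP using Python's built-in server:

```shell
# serve_dir: share the current directory over HTTP on the given port
# (default 8000) using Python's built-in web server, which listens on
# all interfaces by default; prints the server's PID so it can be
# stopped later with kill.
serve_dir() {
  python3 -m http.server "${1:-8000}" >/dev/null 2>&1 &
  echo "$!"
}

# usage:
#   cd /dir/containing/windows.tar
#   pid=$(serve_dir 8000)   # clients fetch http://192.168.2.2:8000/windows.tar
#   kill "$pid"             # stop sharing
```

This is no replacement for a properly configured httpd or nginx, but it is handy for a quick LAN download.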
CentOS 7 Share a file on my network so anyone can download it
1,497,627,506,000
I'm setting up a new Linux server and have installed vsftpd. I can log in to FTP fine using the root user, but not using my new user "AMP". I'm using the same password I'd use to log in as AMP over SSH, so it's not a wrong password. I've looked around and found there is a userlist setting... but I've set it to NO, hoping that means I don't have to worry about user lists at all. AMP has a home folder set (/home/AMP)... I'm a bit lost here. I'm sure it's something simple... anyone have any ideas?

# Run standalone? vsftpd can run either from an inetd or as a standalone
# daemon started from an initscript.
listen=NO
userlist_enable=NO
#
# This directive enables listening on IPv6 sockets. By default, listening
# on the IPv6 "any" address (::) will accept connections from both IPv6
# and IPv4 clients. It is not necessary to listen on *both* IPv4 and IPv6
# sockets. If you want that (perhaps because you want to listen on specific
# addresses) then you must run two copies of vsftpd with two configuration
# files.
listen_ipv6=YES
#
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES
Found the answer here - New local user can't login to vsftpd Added the /bin/false shell to my user in /etc/passwd ... and then added that shell to the list in /etc/shells. Tried to login and it worked!!!
Login incorrect when trying to login to VSFTPD
1,497,627,506,000
I am attempting to download a folder where it has 777 permissions as well as its subfolders, but when I try to use get in ftp, It brings back the error: 550 Failed to open file Iv'e checked the permissions: ubuntu@ubuntu:~/sourceDev/7-1-15_asm/4$ ls -l total 1748 -rwxrwxrwx 1 ubuntu ubuntu 164 Jul 5 09:21 brianThread.hpp -rwxrwxrwx 1 ubuntu ubuntu 1741092 Jul 5 09:21 brianThread.hpp.gch -rwxrwxrwx 1 ubuntu ubuntu 9239 Jul 3 13:58 brianThread.so -rwxrwxrwx 1 ubuntu ubuntu 9147 Jul 5 09:48 main -rwxrwxrwx 1 ubuntu ubuntu 236 Jul 5 19:46 main.cpp -rwxrwxrwx 1 ubuntu ubuntu 9147 Jul 5 19:46 main.out ubuntu@ubuntu:~/sourceDev/7-1-15_asm/4$ cd .. ubuntu@ubuntu:~/sourceDev/7-1-15_asm$ ls 1 2 3 4 5 ubuntu@ubuntu:~/sourceDev/7-1-15_asm$ cd .. ubuntu@ubuntu:~/sourceDev$ ls -l total 4 drwxrwxrwx 7 ubuntu ubuntu 4096 Jul 3 13:59 7-1-15_asm ubuntu@ubuntu:~/sourceDev$ cd * ubuntu@ubuntu:~/sourceDev/7-1-15_asm$ ls -l total 20 drwxrwxrwx 2 ubuntu ubuntu 4096 Jul 3 16:59 1 drwxrwxrwx 2 ubuntu ubuntu 4096 Jul 2 16:49 2 drwxrwxrwx 2 ubuntu ubuntu 4096 Jul 3 05:06 3 drwxrwxrwx 2 ubuntu ubuntu 4096 Jul 5 19:46 4 drwxrwxrwx 2 ubuntu ubuntu 4096 Jul 3 14:00 5 Here's what I've tried to copy the files: ftp> cd 7-1-15_asm 250 Directory successfully changed. ftp> ls 200 PORT command successful. Consider using PASV. 150 Here comes the directory listing. drwxrwxrwx 2 1000 1000 4096 Jul 03 16:59 1 drwxrwxrwx 2 1000 1000 4096 Jul 02 16:49 2 drwxrwxrwx 2 1000 1000 4096 Jul 03 05:06 3 drwxrwxrwx 2 1000 1000 4096 Jul 05 19:46 4 drwxrwxrwx 2 1000 1000 4096 Jul 03 14:00 5 226 Directory send OK. ftp> get 4 local: 4 remote: 4 200 PORT command successful. Consider using PASV. 550 Failed to open file. ftp> get 4 /home/brian local: /home/brian remote: 4 200 PORT command successful. Consider using PASV. 550 Failed to open file. ftp> What am I doing wrong that prevents me from copying a folder from the ftp server to my computer. Both of the computers are on the same network. 
To summarize: I want to copy /home/ubuntu/sourceDev/7-1-15_asm/4 (which is a folder) and its contents from the computer running the FTP server to the computer issuing the ftp command. Also, what are the minimum permissions a folder and its contents can have while still being copyable to another computer using the ftp command?
FTP clients aren't particularly good at downloading whole directories. Try using wget with its useful "-r" (recursive) option; "-l 0" ensures it isn't limited to 5 levels of directory.

wget -r -l 0 ftp://user:password@host/7-1-15_asm/4
How do I retrieve files using an internet connection and commands?
1,497,627,506,000
Host is HP-UX 11i v3. I am trying to log FTP transfers outside syslog.log. In /etc/inetd.conf:

ftp stream tcp6 nowait root /usr/lbin/ftpd ftpd -u 022 -o -i -X

where -i/-o tell ftpd to log input/output and -X tells ftpd to use syslogd facilities. In syslog.conf:

mail.debug /var/adm/syslog/mail.log
*.info;mail.none;local5.none /var/adm/syslog/syslog.log
local5.* /var/adm/syslog/t-LOCAL5.log
mail.info /var/adm/syslog/t-MAIL.log
*.alert /dev/console
*.alert root
*.emerg *

The log of FTP activities goes to xferlog (as instructed by ftpd), but not to any user-defined log. How can I redirect ftpd's log?
Solution:

inetd
1) You must log ftp/ftpd traffic. In /etc/inetd.conf add the following arguments:

ftpd -l -v -o -i -W

where
-l logs traffic to syslog.log
-v adds verbosity
-i logs incoming files (to /var/adm/syslog/xferlog)
-o logs outgoing files (to /var/adm/syslog/xferlog)
-W tells ftpd to use syslog's configuration

syslog
2) Tell syslogd to catch ftpd's log. By default, syslogd will log ftpd's messages to /var/adm/syslog/syslog.log and incoming/outgoing files to /var/adm/syslog/xferlog; if you want to keep them apart, add/modify the following lines in /etc/syslog.conf:

*.info;mail.none;local5.none /var/adm/syslog/syslog.log
local5.debug /var/adm/syslog/ftpd.log

where
local5.none tells syslog not to log ftp (local5) traffic to syslog.log
local5.debug logs all activities (including deletes) to /var/adm/syslog/ftpd.log

Side note: if you do not care about file deletion, you may wish to use only -i/-o in inetd.conf and /var/adm/syslog/xferlog, and not bother with all of the above, which adds a lot of verbosity that is not always useful.
HPUX logging ftp connection
1,497,627,506,000
How can I write an ftp command for a file transfer from one server to another with a simple script?
lftp (and a lot of other ftp clients) will let you specify username, password, and the series of commands to issue with its command line. man lftp for more details.
How can I write an ftp command for a file transfer from one server to another?
1,497,627,506,000
I need to open port 21 on a Linux (CentOS 5) virtual machine I have. I have tried several Google solutions, but none are working. I was wondering if someone could tell me how to do this. Below is the output of netstat -tulpn: tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 3576/hpiod tcp 0 0 0.0.0.0:611 0.0.0.0:* LISTEN 3397/rpc.statd tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 3365/portmap tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 3020/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3629/sendmail: acce tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 3582/python tcp 0 0 :::22 :::* LISTEN 3595/sshd udp 0 0 0.0.0.0:68 0.0.0.0:* 3278/dhclient udp 0 0 0.0.0.0:605 0.0.0.0:* 3397/rpc.statd udp 0 0 0.0.0.0:608 0.0.0.0:* 3397/rpc.statd udp 0 0 0.0.0.0:5353 0.0.0.0:* 3729/avahi-daemon: udp 0 0 0.0.0.0:111 0.0.0.0:* 3365/portmap udp 0 0 0.0.0.0:57333 0.0.0.0:* 3729/avahi-daemon: udp 0 0 0.0.0.0:631 0.0.0.0:* 3020/cupsd udp 0 0 192.168.201.90:123 0.0.0.0:* 3611/ntpd udp 0 0 127.0.0.1:123 0.0.0.0:* 3611/ntpd udp 0 0 0.0.0.0:123 0.0.0.0:* 3611/ntpd udp 0 0 :::5353 :::* 3729/avahi-daemon: udp 0 0 :::52217 :::* 3729/avahi-daemon: udp 0 0 fe80::20c:29ff:fe66:123 :::* 3611/ntpd udp 0 0 ::1:123 :::* 3611/ntpd udp 0 0 :::123 :::* 3611/ntpd And here is the output of iptables -L -n: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
I figured it out. I don't have an FTP server running on the machine I am trying to connect to.
How can I open port 21 on a Linux VM?
1,497,627,506,000
I am hoping you can help out. Our DevOps guy is out of the office at our agency and our partners need access to our FTP. We have it locked down to our office but need to open it so that people outside our office can connect for a few weeks while our partners complete development work. Unfortunately, I have limited knowledge of the server-side stuff and am still learning. We are running CentOS 7 and iptables; here are the rules we have currently:

sudo iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 60692 98M ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
 138 8258 ACCEPT all -- lo any anywhere anywhere
 1943 90860 INPUT_direct all -- any any anywhere anywhere
 1943 90860 INPUT_ZONES_SOURCE all -- any any anywhere anywhere
 1943 90860 INPUT_ZONES all -- any any anywhere anywhere
 465 18696 DROP all -- any any anywhere anywhere ctstate INVALID
 0 0 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 0 0 ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
 0 0 ACCEPT all -- lo any anywhere anywhere
 0 0 FORWARD_direct all -- any any anywhere anywhere
 0 0 FORWARD_IN_ZONES_SOURCE all -- any any anywhere anywhere
 0 0 FORWARD_IN_ZONES all -- any any anywhere anywhere
 0 0 FORWARD_OUT_ZONES_SOURCE all -- any any anywhere anywhere
 0 0 FORWARD_OUT_ZONES all -- any any anywhere anywhere
 0 0 DROP all -- any any anywhere anywhere ctstate INVALID
 0 0 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 60733 packets, 193M bytes)
 pkts bytes target prot opt in out source destination
 60757 193M OUTPUT_direct all -- any any anywhere anywhere

Chain FORWARD_IN_ZONES (1 references)
 pkts bytes target prot opt in out source destination
 0 0 FWDI_public all -- eth0 any anywhere anywhere [goto]
 0 0 FWDI_public all -- + any anywhere anywhere [goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references)
 pkts bytes target prot opt in out source destination

Chain FORWARD_OUT_ZONES (1 references)
 pkts bytes target prot opt in out source destination
 0 0 FWDO_public all -- any eth0 anywhere anywhere [goto]
 0 0 FWDO_public all -- any + anywhere anywhere [goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
 pkts bytes target prot opt in out source destination

Chain FORWARD_direct (1 references)
 pkts bytes target prot opt in out source destination

Chain FWDI_public (2 references)
 pkts bytes target prot opt in out source destination
 0 0 FWDI_public_log all -- any any anywhere anywhere
 0 0 FWDI_public_deny all -- any any anywhere anywhere
 0 0 FWDI_public_allow all -- any any anywhere anywhere
 0 0 ACCEPT icmp -- any any anywhere anywhere

Chain FWDI_public_allow (1 references)
 pkts bytes target prot opt in out source destination

Chain FWDI_public_deny (1 references)
 pkts bytes target prot opt in out source destination

Chain FWDI_public_log (1 references)
 pkts bytes target prot opt in out source destination

Chain FWDO_public (2 references)
 pkts bytes target prot opt in out source destination
 0 0 FWDO_public_log all -- any any anywhere anywhere
 0 0 FWDO_public_deny all -- any any anywhere anywhere
 0 0 FWDO_public_allow all -- any any anywhere anywhere

Chain FWDO_public_allow (1 references)
 pkts bytes target prot opt in out source destination

Chain FWDO_public_deny (1 references)
 pkts bytes target prot opt in out source destination

Chain FWDO_public_log (1 references)
 pkts bytes target prot opt in out source destination

Chain INPUT_ZONES (1 references)
 pkts bytes target prot opt in out source destination
 1943 90860 IN_public all -- eth0 any anywhere anywhere [goto]
 0 0 IN_public all -- + any anywhere anywhere [goto]

Chain INPUT_ZONES_SOURCE (1 references)
 pkts bytes target prot opt in out source destination

Chain INPUT_direct (1 references)
 pkts bytes target prot opt in out source destination

Chain IN_public (2 references)
 pkts bytes target prot opt in out source destination
 1943 90860 IN_public_log all -- any any anywhere anywhere
 1943 90860 IN_public_deny all -- any any anywhere anywhere
 1943 90860 IN_public_allow all -- any any anywhere anywhere
 0 0 ACCEPT icmp -- any any anywhere anywhere

Chain IN_public_allow (1 references)
 pkts bytes target prot opt in out source destination
 1 52 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ssh ctstate NEW
 498 22176 ACCEPT tcp -- any any anywhere anywhere tcp dpt:http ctstate NEW
 979 49936 ACCEPT tcp -- any any anywhere anywhere tcp dpt:https ctstate NEW
 0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ftp ctstate NEW
 0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:webcache ctstate NEW
 0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpts:ndmps:50000 ctstate NEW
 0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ftp ctstate NEW
 0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ftp-data ctstate NEW

Chain IN_public_deny (1 references)
 pkts bytes target prot opt in out source destination

Chain IN_public_log (1 references)
 pkts bytes target prot opt in out source destination

Chain OUTPUT_direct (1 references)
 pkts bytes target prot opt in out source destination
 24 1224 ACCEPT tcp -- any any anywhere anywhere tcp dpt:smtp
Your CentOS system is using Firewalld, so you'd want to update the firewall configuration using it. You'd use the firewall-cmd command to modify the rules. For example, to allow the FTP service on the public zone, you'd use: # firewall-cmd --add-service=ftp --zone=public You can see that there are rules already defined with firewalld in the output of your iptables command, which is why you see all those tables with '_public' in their name.
Need help with firewall - Accept FTP connections from outside
1,497,627,506,000
I am trying to transfer files from a remote server to my local machine using FTP. I am ssh'd on to the remote server and want to connect to my local machine. The remote server runs on a Linux operating system whereas my local machine runs on a Windows operating system. I know how to transfer files to and from the remote server from my local machine but I am confused as to how to transfer files to and from the local machine from the remote server. How do I do this?
If you know how to transfer the files when initiating the transfer from your local machine, then just open another window and do it. I should point out that using FTP for file transfers is insecure. FTP also doesn't work well with NAT routers. If you need to initiate the transfer from the remote side, create a reverse TCP tunnel with SSH and use that to connect to your local machine and transfer the files.
How to transfer files from remote server to local machine when ssh'd in to remote server
1,497,627,506,000
I am working on a Linux server remotely. Is there any command that allows me to figure out the IP address of this Linux server, so that I can ftp some files to it?
If the remote machine is directly connected to the internet:

hostname -I | cut -d' ' -f1

otherwise, one of these:

wget -qO - http://whatsmyip.me/
wget -qO - http://ipinfo.io/ip
wget -qO - http://ipecho.net/plain; echo

In all cases, this must be run on the remote machine.
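A small wrapper around the first approach (the first_ip name is mine): hostname -I prints all of the machine's addresses separated by spaces, so the job is just to keep the first field:

```shell
# first_ip: print the first whitespace-separated field of stdin.
# On the server, pipe `hostname -I` into it to get the primary address.
first_ip() {
  awk '{print $1}'
}

# usage (run on the remote machine):
#   hostname -I | first_ip
```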
check ip address of a linux server to upload the file [duplicate]
1,497,627,506,000
What is the method to block a machine from establishing connections to an outside FTP server, both ftp and sftp? inetd, iptables, shutting down a service?
I'm guessing, but it looks like you want to block connections to port 21 and port 22. This can be done on the host itself; for ftp:

iptables -I OUTPUT -p tcp --dport 21 \
    -d the.rem.ote.ip \
    -m comment --comment "blocked as per ticket##" \
    -j REJECT

SFTP, though, is tricky: it shares a port with ssh. If you are okay with blocking outgoing ssh connections too, then ^21^22 after that last command. If you need to keep ssh and block sftp, you'd need to configure the remote end to never offer any sftp subsystem; or change your local config (not sure how) to prevent sftp of any kind (the challenge being users can download sftp binaries at any time). Blocking ports 21 and 22 on an intervening firewall you control is much more reliable, but that's not going to be a CentOS/RHEL issue that you'd need to ask here. I think if you can't block all port-22 access via firewalls, then you're in for a rough time.
How to stop outbound ftp from being established. centos/ rhel
1,447,875,240,000
I'm new to bash scripting. I need to create subfolders under each directory containing a specific name on an FTP server. E.g.:

A1/B1/Name1
|
|_C1
|_C2
A1/B1/Name2
|
|_C1
|_C2
A1/B1/Name3
|
|_C4
|_C5
A1/B1/Name4
|
|_C1
|_C2

My main directory is A1/B1, where I have Name1,2,3,4 subdirectories that have subfolders C1,C2. I need to find which directories have a C1 and C2 subfolder, and create a CX subfolder in all directories that have C1 and C2, recursively.
#! /bin/sh -
cd A1/B1 || exit
ret=0
for dir in */; do
  if [ -d "${dir}C1" ] && [ -d "${dir}C2" ]; then
    mkdir -p -- "${dir}CX" || ret=$?
  fi
done
exit "$ret"
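A quick throwaway check of the same loop logic, using directory names from the question (built under mktemp so nothing real is touched):

```shell
# Build a sample tree: Name1 has C1 and C2, Name3 has only C4 and C5.
tmp=$(mktemp -d)
mkdir -p "$tmp/A1/B1/Name1/C1" "$tmp/A1/B1/Name1/C2" \
         "$tmp/A1/B1/Name3/C4" "$tmp/A1/B1/Name3/C5"

# Same test as in the script: CX is created only where C1 and C2 both exist.
cd "$tmp/A1/B1" || exit
for dir in */; do
  if [ -d "${dir}C1" ] && [ -d "${dir}C2" ]; then
    mkdir -p -- "${dir}CX"
  fi
done
```

Afterwards Name1 contains CX but Name3 does not.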
Bash script that finds specific folders in subdirectories and creates a directory in all matching directories recursively
1,447,875,240,000
I have an exam in a few days about basic Linux topics (I'm not familiar with Linux, so I'm studying for it). There will be two Linux machines, VM1 and VM2, running on VMware and connected via Ethernet. I created this environment on my PC to prepare myself (two Linux VMs on VMware). The first task is to ping between both machines; that's easy. The second task asks to run some /start_ptpd. Is that a built-in Linux command, or does it mean running some external program?
The exercise you posted must be run on a lab that is already set up, with VM_2 containing a FTP server and a custom shell script start_ftp that starts the FTP server. It won't work on a random Linux machine. If you don't have access to such a lab, you can install any FTP server via the package manager of your Linux distribution, then try to catch up with the exercises. But the best would be to ask your instructor.
FTP to download file from one linux to another
1,328,908,576,000
I know that shell scripts just run commands as if they were executed in at the command prompt. I'd like to be able to run shell scripts as if they were functions... That is, taking an input value or string into the script. How do I approach doing this?
The shell command and any arguments to that command appear as numbered shell variables: $0 has the string value of the command itself, something like script, ./script, /home/user/bin/script or whatever. Any arguments appear as "$1", "$2", "$3" and so on. The count of arguments is in the shell variable "$#". Common ways of dealing with this involve shell commands getopts and shift. getopts is a lot like the C getopt() library function. shift moves the value of $2 to $1, $3 to $2, and so on; $# gets decremented. Code ends up looking at the value of "$1", doing things using a case…esac to decide on an action, and then doing a shift to move $1 to the next argument. It only ever has to examine $1, and maybe $#.
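A minimal sketch tying those pieces together (the greet function name is hypothetical); the same expansions apply inside a standalone script, where they refer to the script's own command-line arguments:

```shell
# greet: show the numbered parameters in action.
greet() {
  if [ "$#" -lt 1 ]; then                 # "$#" is the argument count
    echo "usage: greet NAME..." >&2
    return 1
  fi
  echo "first argument: $1"
  echo "argument count: $#"
  for name in "$@"; do                    # "$@" expands to each argument, quoted
    echo "hello, $name"
  done
}

# greet Alice Bob  ->  first argument: Alice
#                      argument count: 2
#                      hello, Alice
#                      hello, Bob
```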
How can I pass a command line argument into a shell script?
1,328,908,576,000
Noone should need 10 years for asking this question, like I did. If I were just starting out with Linux, I'd want to know: When to alias, when to script and when to write a function? Where aliases are concerned, I use aliases for very simple operations that don't take arguments. alias houston='cd /home/username/.scripts/' That seems obvious. But some people do this: alias command="bash bashscriptname" (and add it to the .bashrc file). Is there a good reason to do that? I didn't come across a circumstance for this. If there is an edge case where that would make a difference, please answer below. That's where I would just put something in my PATH and chmod +x it, which is another thing that came after years of Linux trial-and-error. Which brings me to the next topic. For instance, I added a hidden folder (.scripts/) in the home directory to my PATH by just adding a line to my .bashrc (PATH=$PATH:/home/username/.scripts/), so anything executable in there automagically autocompletes. I don't really need that, do I? I would only use that for languages which are not the shell, like Python. If it's the shell, I can just write a function inside the very same .bashrc: funcname () { somecommand -someARGS "$@" } Did I miss anything? What would you tell a beginning Linux user about when to alias, when to script and when to write a function? If it's not obvious, I'm assuming the people who answer this will make use of all three options. If you only use one or two of these three (aliases, scripts, functions), this question isn't really aimed at you.
An alias should effectively not (in general) do more than change the default options of a command. It is nothing more than simple text replacement on the command name. It can't do anything with arguments but pass them to the command it actually runs. So if you simply need to add an argument at the front of a single command, an alias will work. Common examples are

# Make ls output in color by default.
alias ls="ls --color=auto"
# Make mv ask before overwriting a file by default.
alias mv="mv -i"

A function should be used when you need to do something more complex than an alias but that wouldn't be of use on its own. For example, take this answer on a question I asked about changing grep's default behavior depending on whether it's in a pipeline:

grep() {
  if [[ -t 1 ]]; then
    command grep -n "$@"
  else
    command grep "$@"
  fi
}

It's a perfect example of a function because it is too complex for an alias (requiring different defaults based on a condition), but it's not something you'll need in a non-interactive script. If you get too many functions or functions too big, put them into separate files in a hidden directory, and source them in your ~/.bashrc:

if [ -d ~/.bash_functions ]; then
  for file in ~/.bash_functions/*; do
    . "$file"
  done
fi

A script should stand on its own. It should have value as something that can be re-used, or used for more than one purpose.
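One classic example of the boundary between aliases and functions: an alias can only prepend text to a command line, so anything that needs its argument in the middle, or twice, has to be a function. The mkcd name below is just a common convention, not something built in:

```shell
# mkcd: create a directory (with parents) and change into it.
# An alias cannot do this, because the argument is used by both
# commands; a function receives it as "$1".
mkcd() {
  mkdir -p -- "$1" && cd -- "$1"
}
```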
In Bash, when to alias, when to script and when to write a function?
1,328,908,576,000
source some_file

some_file:
doit () {
    echo doit $1
}
export TEST=true

If I source some_file the function "doit" and the variable TEST are available on the command line. But running this script:

script.sh:
#/bin/sh
echo $TEST
doit test2

Will return the value of TEST, but will generate an error about the unknown function "doit". Can I "export" the function, too, or do I have to source some_file in script.sh to use the function there?
In Bash you can export function definitions to other shell scripts that your script calls with:

```
export -f function_name
```

For example, you can try this simple example:

`./script1`:

```
#!/bin/bash
myfun() {
    echo "Hello!"
}
export -f myfun
./script2
```

`./script2`:

```
#!/bin/bash
myfun
```

Then if you call `./script1` you will see the output `Hello!`.
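A minimal, self-contained sketch of the same mechanism, using `bash -c` in place of a second script file (the one-file setup is my simplification; the function name `myfun` follows the answer):

```shell
#!/usr/bin/env bash
myfun() {
    echo "Hello!"
}
export -f myfun

# The child bash inherits the exported function through the environment.
child_output=$(bash -c 'myfun')
echo "$child_output"
```

Without the `export -f` line, the child `bash -c 'myfun'` would fail with "command not found", which is exactly the error the question describes.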
Can I "export" functions in bash?
1,328,908,576,000
I can define bash functions using or omitting the `function` keyword. Is there any difference?

```
#!/bin/bash

function foo() {
    echo "foo"
}

bar() {
    echo "bar"
}

foo
bar
```

Both calls to the functions `foo` and `bar` succeed, and I can't see any difference. So I am wondering if it is just to improve readability, or if there is something that I am missing...

BTW, in other shells like dash (`/bin/sh` is symlinked to dash in Debian/Ubuntu) it fails when using the `function` keyword.
There is no functional difference in bash. The `function` keyword comes from ksh; the `name()` form is the one specified by POSIX, which is why the second version is more portable (and why dash rejects the `function` keyword).
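A quick way to confirm that bash treats the two forms identically is `type -t`, which reports each one simply as a function; a small sketch:

```shell
#!/usr/bin/env bash
# Both definition styles produce ordinary shell functions in bash.
function foo() { echo "foo"; }
bar() { echo "bar"; }

type_foo=$(type -t foo)
type_bar=$(type -t bar)
echo "$type_foo $type_bar"
```

Both variables come back as `function`; bash keeps no record of which syntax defined them.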
difference between "function foo() {}" and "foo() {}"
1,328,908,576,000
At work, I write bash scripts frequently. My supervisor has suggested that the entire script be broken into functions, similar to the following example:

```
#!/bin/bash

# Configure variables
declare_variables() {
    noun=geese
    count=three
}

# Announce something
i_am_foo() {
    echo "I am foo"
    sleep 0.5
    echo "hear me roar!"
}

# Tell a joke
walk_into_bar() {
    echo "So these ${count} ${noun} walk into a bar..."
}

# Emulate a pendulum clock for a bit
do_baz() {
    for i in {1..6}; do
        expr $i % 2 >/dev/null && echo "tick" || echo "tock"
        sleep 1
    done
}

# Establish run order
main() {
    declare_variables
    i_am_foo
    walk_into_bar
    do_baz
}

main
```

Is there any reason to do this other than "readability", which I think could be equally well established with a few more comments and some line spacing?

Does it make the script run more efficiently (I would actually expect the opposite, if anything), or does it make it easier to modify the code beyond the aforementioned readability potential? Or is it really just a stylistic preference?

Please note that although the script doesn't demonstrate it well, the "run order" of the functions in our actual scripts tends to be very linear -- `walk_into_bar` depends on stuff that `i_am_foo` has done, and `do_baz` acts on stuff set up by `walk_into_bar` -- so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put `declare_variables` after `walk_into_bar`; that would break things.

An example of how I would write the above script would be:

```
#!/bin/bash

# Configure variables
noun=geese
count=three

# Announce something
echo "I am foo"
sleep 0.5
echo "hear me roar!"

# Tell a joke
echo "So these ${count} ${noun} walk into a bar..."

# Emulate a pendulum clock for a bit
for i in {1..6}; do
    expr $i % 2 >/dev/null && echo "tick" || echo "tock"
    sleep 1
done
```
I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming". He gives quite a few good reasons, but personally I find these the most important:

- Procedures become descriptive: it's much easier to figure out what a particular part of the code is supposed to do. Instead of a wall of code, you see "Oh, the `find_log_errors` function reads that log file for errors". Compare that with finding a whole lot of awk/grep/sed lines that use god knows what type of regex in the middle of a lengthy script -- you've no idea what it's doing there unless there are comments.

- You can debug functions by enclosing them in `set -x` and `set +x`. Once you know the rest of the code works alright, you can use this trick to focus on debugging only that specific function. Sure, you can enclose parts of a script, but what if it's a lengthy portion? It's easier to do something like this:

  ```
  set -x
  parse_process_list
  set +x
  ```

- Printing usage with `cat <<- EOF . . . EOF`. I've used it quite a few times to make my code much more professional. In addition, `parse_args()` with the `getopts` function is quite convenient. Again, this helps with readability, instead of shoving everything into the script as a giant wall of text. It's also convenient to reuse these.

And obviously, this is much more readable for someone who knows C or Java, or Vala, but has limited bash experience.

As far as efficiency goes, there's not a lot you can do -- bash itself isn't the most efficient language, and people prefer perl and python when it comes to speed and efficiency. However, you can `nice` a function:

```
nice -10 resource_hungry_function
```

Compared to calling `nice` on each and every line of code, this decreases a whole lot of typing AND can be conveniently used when you want only a part of your script to run with lower priority.

Running functions in the background, in my opinion, also helps when you want a whole bunch of statements to run in the background.
Some of the examples where I've used this style: https://askubuntu.com/a/758339/295286 https://askubuntu.com/a/788654/295286 https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh
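As a concrete sketch of the `parse_args()`-with-`getopts` pattern mentioned in the answer above (the option letters and variable names here are invented for illustration):

```shell
#!/usr/bin/env bash
# Minimal argument-parsing function using getopts.
# -v enables verbose mode, -o FILE sets an output file name.
parse_args() {
    verbose=0
    outfile=""
    local opt
    while getopts "vo:" opt; do
        case "$opt" in
            v) verbose=1 ;;
            o) outfile=$OPTARG ;;
            *) return 1 ;;
        esac
    done
}

parse_args -v -o result.txt
echo "verbose=$verbose outfile=$outfile"
```

Because the function sets plain (non-`local`) variables, the parsed values are available to the rest of the script after the single `parse_args "$@"`-style call.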
Why write an entire bash script in functions?
1,328,908,576,000
When I define a new alias in the `.bash_aliases` file or a new function in the `.bashrc` file, is there some refresh command so that I can immediately use the new aliases or functions without closing the terminal (in my case xfce4-terminal with a few tabs open, many files open, and in the middle of work)?
Sourcing the changed file will provide access to the newly written alias or function in the current terminal, for example:

```
source ~/.bashrc
```

An alternative syntax:

```
. ~/.bashrc
```

Note that if you have many instances of bash running in your terminal (you mentioned multiple tabs), you will have to run this in every instance.
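A self-contained sketch of the mechanism (a temp file stands in for `~/.bashrc` here so the example doesn't touch real dotfiles; the `greet` function is illustrative):

```shell
#!/usr/bin/env bash
# Write a new function definition to a file, then source it
# to make it available in the current shell immediately.
deffile=$(mktemp)
cat > "$deffile" <<'EOF'
greet() {
    echo "hi from a freshly sourced function"
}
EOF

. "$deffile"    # same as: source "$deffile"
greet
```

Until the `.` (source) line runs, `greet` does not exist in the shell; afterwards it is callable like any other function, with no need to open a new terminal.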
Refresh aliases and functions after defining new aliases and functions?
1,328,908,576,000
After reading ilkkachu's answer to this question, I learned of the existence of the `declare` shell builtin (with the `-n` argument). `help declare` brings up:

```
Set variable values and attributes.

Declare variables and give them attributes.  If no NAMEs are given,
display the attributes and values of all variables.

  -n   ... make NAME a reference to the variable named by its value
```

I ask for a general explanation with an example regarding `declare` because I don't understand the man page. I know what a variable is and what expanding it means, but I still don't follow the documentation on `declare` (what is a variable attribute?). Maybe you'd like to explain this based on the code by ilkkachu in the answer:

```
#!/bin/bash
function read_and_verify {
    read -p "Please enter value for '$1': " tmp1
    read -p "Please repeat the value to verify: " tmp2
    if [ "$tmp1" != "$tmp2" ]; then
        echo "Values unmatched. Please try again."; return 2
    else
        declare -n ref="$1"
        ref=$tmp1
    fi
}
```
In most cases an implicit declaration in bash is enough:

```
asdf="some text"
```

But sometimes you want a variable's value to only be an integer (so that if it later changes, even automatically, it can only change to an integer, and defaults to zero in some cases), and you can use:

```
declare -i num
```

or

```
declare -i num=15
```

Sometimes you want arrays, and then you need `declare`:

```
declare -a asdf    # indexed type
declare -A asdf    # associative type
```

You can find good tutorials about arrays in bash when you browse the internet with the search string 'bash array tutorial' (without quotes), for example linuxconfig.org/how-to-use-arrays-in-bash-script

I think these are the most common cases where you declare variables.

Please note also that, in a function, `declare` makes the variable local (to the function). Called without any name, it lists all variables (in the active shell):

```
declare
```

Finally, you get a brief summary of the features of the shell builtin command `declare` in bash with the command:

```
help declare
```
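A short runnable sketch of the `-i`, `-a`, and `-A` attributes described above (variable names are illustrative):

```shell
#!/usr/bin/env bash
# Integer attribute: assignments are evaluated arithmetically.
declare -i num=15
num+=5            # arithmetic addition (20), not string append, because of -i

# Indexed array.
declare -a fruits=(apple banana)

# Associative array (bash 4+).
declare -A capitals
capitals[France]=Paris

echo "$num ${fruits[1]} ${capitals[France]}"
```

Without `declare -i`, the line `num+=5` would have appended the string and produced `155`; the attribute is what changes the meaning of the assignment.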
What is "declare" in Bash?
1,328,908,576,000
So I started using zsh. I like it all right. It seems very cool and slick, and the fact that the current working directory and actual command line are on different lines is nice, but at the same time, I'm noticing that zsh can be a bit slower than bash, especially when printing text to the screen. The thing I liked best was the fact that zsh was 'backward compatible' with all of the functions I defined in my .bashrc. One gripe though. The functions all work perfectly, but I can't figure out how the exporting system works. I had some of those .bashrc functions exported so that I could use them elsewhere, such as in scripts and external programs, with export -f. In zsh, exporting doesn't seem to even be talked about. Is it autoloading? Are those two things the same? I'm having a seriously hard time figuring that out.
Environment variables containing functions are a bash hack. Zsh doesn't have anything similar. You can do something similar with a few lines of code.

Environment variables contain strings; older versions of bash, before Shellshock was discovered, stored the function's code in a variable whose name is that of the function and whose value is `() {` followed by the function's code followed by `}`. You can use the following code to import variables with this encoding, and attempt to run them with bash-like settings. Note that zsh cannot emulate all bash features; all you can do is get a bit closer (e.g. to make `$foo` split the value and expand wildcards, and make arrays 0-based).

```
bash_function_preamble='
    emulate -LR ksh
'
for name in ${(k)parameters}; do
  [[ "-$parameters[name]-" = *-export-* ]] || continue
  [[ ${(P)name} = '() {'*'}' ]] || continue
  ((! $+builtins[$name])) || continue
  functions[$name]=$bash_function_preamble${${${(P)name}#"() {"}%"}"}
done
```

(As Stéphane Chazelas, the original discoverer of Shellshock, noted, an earlier version of this answer could execute arbitrary code at this point if the function definition was malformed. This version doesn't, but of course as soon as you execute any command, it could be a function imported from the environment.)

Post-Shellshock versions of bash encode functions in the environment using invalid variable names (e.g. `BASH_FUNC_myfunc%%`). This makes them harder to parse reliably, as zsh doesn't provide an interface to extract such variable names from the environment.

I don't recommend doing this. Relying on exported functions in scripts is a bad idea: it creates an invisible dependency in your script. If you ever run your script in an environment that doesn't have your function (on another machine, in a cron job, after changing your shell initialization files, …), your script won't work anymore.
Instead, store all your functions in one or more separate files (something like ~/lib/shell/foo.sh) and start your scripts by importing the functions that it uses (. ~/lib/shell/foo.sh). This way, if you modify foo.sh, you can easily search which scripts are relying on it. If you copy a script, you can easily find out which auxiliary files it needs. Zsh (and ksh before it) makes this more convenient by providing a way to automatically load functions in scripts where they are used. The constraint is that you can only put one function per file. Declare the function as autoloaded, and put the function definition in a file whose name is the name of the function. Put this file in a directory listed in $fpath (which you may configure through the FPATH environment variable). In your script, declare autoloaded functions with autoload -U foo. Furthermore zsh can compile scripts, to save parsing time. Call zcompile to compile a script. This creates a file with the .zwc extension. If this file is present then autoload will load the compiled file instead of the source code. You can use the zrecompile function to (re)compile all the function definitions in a directory.
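A sketch of the autoload setup described above, written as zsh configuration (shown as a config fragment, not a runnable script; the paths and the `foo` function are illustrative):

```shell
# ~/.zshrc (zsh, not bash)
fpath=(~/lib/zsh/functions $fpath)   # directory searched for autoloaded functions

# ~/lib/zsh/functions/foo holds one function per file, named after it;
# by default the file's contents become the body of the function.

autoload -U foo                      # mark foo autoloaded; loaded on first call
zcompile ~/lib/zsh/functions/foo     # optional: compile to foo.zwc for faster loading
```

After this, calling `foo` anywhere in the session loads the definition from `$fpath` on demand, preferring the compiled `.zwc` file if it is newer.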
what is the zsh equivalent of bash's export -f
1,328,908,576,000
This is my code:

```
#!/bin/bash

showword() {
    echo $1
}

echo This is a sample message | xargs -d' ' -t -n1 -P2 showword
```

So I have a function `showword` which echoes whatever string you pass as a parameter to the function. Then I have `xargs` trying to call the function and pass one word at a time to it, running 2 copies of the function in parallel.

The thing that is not working is that `xargs` doesn't recognize the function. How can I achieve what I am trying to do -- how can I make `xargs` work with the function `showword`?
Try exporting the function, then calling it in a subshell:

```
showword() {
    echo $1
}
export -f showword

echo This is a sample message | xargs -d' ' -t -n1 -P2 bash -c 'showword "$@"' _
```

This causes `xargs` to execute:

```
bash -c 'showword "$@"' _ This
bash -c 'showword "$@"' _ is
bash -c 'showword "$@"' _ a
        ︙
```

The arguments passed to the `bash` command are passed into the bash environment, but starting from 0. So, inside the function:

- `$0` is `_` and `$1` is `This`
- `$0` is `_` and `$1` is `is`
- `$0` is `_` and `$1` is `a`
- ︙

See Bash -c with positional parameters.

Note that `export -f` works only in Bash, and `-Pn` (`--max-procs=max-procs`) works only in GNU xargs.
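A deterministic, testable variant of the same pattern (my simplification: `-t` and `-P2` are dropped so the output order is fixed; GNU xargs is assumed for `-d`):

```shell
#!/usr/bin/env bash
showword() {
    echo "word: $1"
}
export -f showword

# -d' ' splits on spaces; -n1 passes one word per bash -c invocation.
result=$(printf 'alpha beta gamma' | xargs -d' ' -n1 bash -c 'showword "$@"' _)
echo "$result"
```

Each word reaches the exported function as `$1` of a fresh `bash -c` child, which is why `export -f` is mandatory: a plain function defined in the parent would be invisible there.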
How to use defined function with xargs
1,328,908,576,000
I have a script that does a number of different things, most of which do not require any special privileges. However, one specific section, which I have contained within a function, needs root privileges. I don't wish to require the entire script to run as root, and I want to be able to call this function, with root privileges, from within the script. Prompting for a password if necessary isn't an issue since it is mostly interactive anyway. However, when I try to use sudo functionx, I get: sudo: functionx: command not found As I expected, export didn't make a difference. I'd like to be able to execute the function directly in the script rather than breaking it out and executing it as a separate script for a number of reasons. Is there some way I can make my function "visible" to sudo without extracting it, finding the appropriate directory, and then executing it as a stand-alone script? The function is about a page long itself and contains multiple strings, some double-quoted and some single-quoted. It is also dependent upon a menu function defined elsewhere in the main script. I would only expect someone with sudo ANY to be able to run the function, as one of the things it does is change passwords.
I will admit that there's no simple, intuitive way to do this, and it's a bit hacky. But you can do it like this:

```
function hello() {
    echo "Hello!"
}

# Test that it works.
hello

FUNC=$(declare -f hello)
sudo bash -c "$FUNC; hello"
```

Or more simply:

```
sudo bash -c "$(declare -f hello); hello"
```

It works for me:

```
$ bash --version
GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin14.5.0)
$ hello
Hello!
$ FUNC=$(declare -f hello)
$ sudo bash -c "$FUNC; hello"
Hello!
```

Basically, `declare -f` will return the contents of the function, which you then pass to `bash -c` inline.

If you want to export all functions from the outer instance of bash, change `FUNC=$(declare -f hello)` to `FUNC=$(declare -f)`.

Edit

To address the comments about quoting, see this example:

```
$ hello()
> {
>     echo "This 'is a' test."
> }
$ declare -f hello
hello ()
{
    echo "This 'is a' test."
}
$ FUNC=$(declare -f hello)
$ sudo bash -c "$FUNC; hello"
Password:
This 'is a' test.
```
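The `declare -f` hand-off can be exercised without `sudo` by launching a plain child `bash` the same way; a sketch (using the quoting-heavy `hello` from the answer to show the serialization survives intact):

```shell
#!/usr/bin/env bash
hello() {
    echo "This 'is a' test."
}

# Serialize the function definition and replay it in a fresh shell --
# the same mechanism the sudo invocation relies on, minus the privilege change.
out=$(bash -c "$(declare -f hello); hello")
echo "$out"
```

Because `declare -f` emits a syntactically valid definition, the embedded single quotes round-trip through the `bash -c` string without extra escaping.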
Executing a Bash Script Function with Sudo
1,328,908,576,000
I'm on Solaris 10 and I have tested the following with ksh (88), bash (3.00) and zsh (4.2.1). The following code doesn't yield any result:

```
function foo {
    echo "Hello World"
}
find somedir -exec foo \;
```

The find does match several files (as shown by replacing `-exec ...` with `-print`), and the function works perfectly when called outside of the find call. Here's what the `man find` page says about `-exec`:

```
-exec command   True if the executed command returns a zero value as exit
                status. The end of command must be punctuated by an escaped
                semicolon (;). A command argument {} is replaced by the
                current pathname. If the last argument to -exec is {} and
                you specify + rather than the semicolon (;), the command is
                invoked fewer times, with {} replaced by groups of
                pathnames. If any invocation of the command returns a
                non-zero value as exit status, find returns a non-zero exit
                status.
```

I could probably get away with doing something like this:

```
for f in $(find somedir); do
    foo
done
```

But I'm afraid of dealing with field separator issues.

Is it possible to call a shell function (defined in the same script; let's not bother with scoping issues) from a `find ... -exec ...` call? I tried it with both `/usr/bin/find` and `/bin/find` and got the same result.
A function is local to a shell, so you'd need `find -exec` to spawn a shell and have that function defined in that shell before being able to use it. Something like:

```
find ... -exec ksh -c '
  function foo {
    echo blan: "$@"
  }
  foo "$@"' ksh {} +
```

bash allows one to export functions via the environment with `export -f`, so you can do (in bash):

```
foo() { ...; }
export -f foo
find ... -exec bash -c 'foo "$@"' bash {} +
```

ksh88 has `typeset -fx` to export functions (not via the environment), but that can only be used by she-bang-less scripts executed by ksh, so not with `ksh -c`.

Another option is to do:

```
find ... -exec ksh -c "
  $(typeset -f foo)"'
  foo "$@"' ksh {} +
```

That is, use `typeset -f` to dump the definition of the `foo` function inside the inline script. Note that if `foo` uses other functions, you'll need to dump them as well.

Or instead of passing the function definition on the command line (which would be visible in the output of `ps -f` for instance), you can pass it via an environment variable:

```
FUNCDEFS=$(typeset -f foo) find ... -exec ksh -c '
  eval "$FUNCDEFS" &&
    unset -v FUNCDEFS &&
    foo "$@"' ksh {} +
```

(the `unset -v FUNCDEFS` is there to avoid polluting the environment of commands started by that `foo` function, if any).
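A self-contained sketch of the `export -f` variant (bash and a `find` supporting `-exec ... +` assumed; the temp directory and the `stamp` function are illustrative, and `sort` pins down find's unspecified traversal order):

```shell
#!/usr/bin/env bash
stamp() {
    # Print the basename of each path find hands us.
    local f
    for f in "$@"; do
        echo "seen: ${f##*/}"
    done
}
export -f stamp

workdir=$(mktemp -d)
touch "$workdir/a.txt" "$workdir/b.txt"

found=$(find "$workdir" -type f -name '*.txt' \
            -exec bash -c 'stamp "$@"' bash {} + | sort)
echo "$found"
```

The extra `bash` word after the inline script becomes `$0` in the child shell, so all matched pathnames arrive cleanly in `"$@"` — the same trick as in the xargs answer above.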
Executing user defined function in a find -exec call