I have this file structure: > APPLICATION 1 >> CONTROLLER (@JuniorProgrammers) >> MODELS (symlink to SHARED/MODELS) >> VIEWS (@Designers) > APPLICATION 2 >> CONTROLLER (@JuniorProgrammers) >> MODELS (symlink to SHARED/MODELS) >> VIEWS (@Designers) > SHARED >> MODELS (@SeniorProgrammers) I need PHP to be able to read the Folder 1.1 contents, but programmers who FTP into Folder 1 should not be able to read through the symlink (they CAN see the symlink, but must not be able to FOLLOW it, for all reads and writes). The @ entries are the groups of users that have read/write access at each layer.
Symlinks themselves have 777 because in Unix, file security is judged on a file/inode basis. If it's the same data being operated on, it should have the same security conditions, regardless of the name the system was given to open it. [root@hypervisor test]# ls -l total 0 lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab [root@hypervisor test]# chmod o-rwx symTest [root@hypervisor test]# ls -l total 0 lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab [root@hypervisor test]# :-( Since permissions are set on the inode, it won't even work with hard links: [root@hypervisor test]# echo "Don't Test Me, Bro" > testing123 [root@hypervisor test]# ls -l total 4 lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab -rw-r--r--. 1 root root 19 Jun 8 16:06 testing123 [root@hypervisor test]# ln testing123 newHardLink [root@hypervisor test]# ls -l total 8 -rw-r--r--. 2 root root 19 Jun 8 16:06 newHardLink lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab -rw-r--r--. 2 root root 19 Jun 8 16:06 testing123 [root@hypervisor test]# chmod 770 testing123 [root@hypervisor test]# chmod 700 newHardLink [root@hypervisor test]# ls -lh total 8.0K -rwx------. 2 root root 19 Jun 8 16:06 newHardLink lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab -rwx------. 2 root root 19 Jun 8 16:06 testing123 A symlink isn't an inode (and the inode is what actually stores the data you're trying to secure), so in the Unix model, having two different sets of permissions protecting the same data would complicate things. It sounds like this is an attempt to give different groups of people different levels of access to the same file. If that's the case, you're actually supposed to use POSIX ACLs (via setfacl and getfacl) to set appropriate permissions on the target of the symlink.
EDIT: To elaborate on the direction you're probably wanting to go in, it's something like: # setfacl -m u:apache:r-- "Folder 2.1" # setfacl -m g:groupOfProgrammers:--- "Folder 2.1" # setfacl -m g:groupOfProgrammers:r-x "Folder 1" The above gives the apache user (substitute whatever user your apache/nginx/whatever runs as) read-only access to the target of the symlink, turns all permission bits off on that same target for groupOfProgrammers, and gives groupOfProgrammers r-x on the directory the symlink lives in (so that groupOfProgrammers can still get a complete directory listing there).
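The inode-based behaviour demonstrated with /etc/fstab above is easy to reproduce locally without touching system files. This is a minimal sketch (all paths are illustrative temp files, not anything from the original setup) showing that chmod through a symlink changes the target's inode while the link itself stays at 777:

```shell
# Reproduce the inode-vs-symlink permission behaviour locally.
dir=$(mktemp -d)
echo "secret" > "$dir/target"
ln -s "$dir/target" "$dir/link"

# chmod through the symlink follows it to the target's inode...
chmod 700 "$dir/link"

# ...so the target's mode changes, while the symlink stays at 777.
target_mode=$(stat -c '%a' "$dir/target")
link_mode=$(stat -c '%a' "$dir/link")
echo "target: $target_mode  symlink: $link_mode"

rm -rf "$dir"
```

Note that GNU stat without -L reports on the link itself (lstat), which is why the two modes can be compared directly.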
How Do I Block Read Access to a Symbolic Link?
Possible Duplicate: How to delete the oldest file over FTP using CURL? I need to connect to a remote server using a bash script and then delete all files within the directory that are older than 7 days. I'm trying to do this with a for loop within the FTP remote server and I get an error. #!/bin/sh HOST='www.ftpserver.com' USER='username' PASSWD='password' ftp -n $HOST <<END_SCRIPT user ${USER} ${PASSWD} for i in {ls} do echo "$i" done quit END_SCRIPT exit 0 The error I get is We only support non-print format, sorry. ?Invalid command ?Invalid command Any pointers will help
Either rewrite your script in Perl, PHP or Python, as they all offer APIs for FTP, or else have a look at using expect. There are several tutorials around.
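Another option, if you can mount the remote location locally (e.g. with curlftpfs, which is an assumption, not part of the original answer), is that the "older than 7 days" selection becomes a plain find. A sketch of just that filter, run here against a throwaway local directory:

```shell
# Sketch: the 7-day age filter against a local directory.
# In the FTP scenario, $dir would be a curlftpfs mount point.
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/stale.log"   # GNU touch date expression
touch "$dir/fresh.log"

# -mtime +7 matches files last modified more than 7*24 hours ago.
old_files=$(find "$dir" -type f -mtime +7)
echo "$old_files"

# To actually delete the matches:
# find "$dir" -type f -mtime +7 -delete

rm -rf "$dir"
```

The -delete step is left commented out so the selection can be reviewed first.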
How to loop through each file in FTP directory using a bash script [duplicate]
I need to transfer a large number of files from an FTP server to a new server. There could be thousands of files, so I would like to limit it to files that were uploaded in the last three months - is that possible, and if so, how? Also, is it possible to find out how big the download is likely to be before you start the actual download? Thanks
You can use lftp for that, utilizing its mirror command. Here's a snip from the manpage: mirror [OPTS] [source [target]] Mirror specified source directory to local target directory. If target directory ends with a slash, the source base name is appended to target directory name. Source and/or target can be URLs pointing to directories. [cut...] -N, --newer-than=SPEC download only files newer than specified time --on-change=CMD execute the command if anything has been changed --older-than=SPEC download only files older than specified time [...] Definitely have a look at the manual, as there are really many useful options to mirror - like --allow-chown, --allow-suid or --parallel[=N] for example. Lftp also works with other access protocols, like sftp, fish or http(s).
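lftp's --newer-than does the date filter server-side. If you instead end up working against a mounted or already-mirrored copy, the same three-month filter and the up-front size estimate can be sketched with GNU find and du (the directory and file names here are illustrative assumptions):

```shell
# Sketch: three-month filter plus up-front size estimate, done locally.
dir=$(mktemp -d)
touch -d '6 months ago' "$dir/ancient.dat"
touch -d '2 weeks ago'  "$dir/recent.dat"

# GNU find understands date expressions with -newermt.
recent=$(find "$dir" -type f -newermt '3 months ago')
echo "$recent"

# Total size of the selection, before transferring anything:
size=$(find "$dir" -type f -newermt '3 months ago' -print0 \
        | du -ch --files0-from=- | tail -n 1)
echo "$size"

rm -rf "$dir"
```

The -print0/--files0-from pairing keeps filenames with spaces intact, and du -c's final "total" line is the size estimate asked about in the question.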
Is it possible to transfer files in a date range via FTP
I've created a user called "ftp-acc" and limited it to a single directory using VSFTPD. When I login to the account in Filezilla using FTP, it is successfully restricted to a single directory. However, when logging in with the same credentials using SFTP, the user can access other directories as well. How can I disable SFTP for the user "ftp-acc"?
First, FTP and SFTP are different protocols. FTP is the plain File Transfer Protocol, while SFTP is the SSH File Transfer Protocol - it is provided by the SSH service, not a standalone daemon. If you want to disable SFTP for one user, open the sshd_config file on the server (the machine you are trying to access), add DenyUsers ftp-acc, and then restart sshd with service sshd restart. Note that DenyUsers blocks all SSH-based access for that user, including shell logins, not just SFTP.
How to disable SFTP for a user, but keep FTP enabled
I want to download multiple files from an FTP server (an Android app). I used the FTP client and the mget command (Ubuntu terminal), but it prompts me to enter y or n for every file that I want to download. I have 1000 files to download, and I can't type y 1000 times. I am searching for an easier way to do the same work. What I tried: user1@system ~ $ ftp ftp> open 192.168.43.1 2221 Connected to 192.168.43.1. 220 Service ready for new user. Name (192.168.43.1:dipankar): android 331 User name okay, need password for android. Password: 230 User logged in, proceed. Remote system type is UNIX. ftp> cd /storage/ABC5-1DF1/DCIM/Camera/ 250 Directory changed to /storage/ABC5-1DF1/DCIM/Camera ftp> mget * mget Aqua Ring_20180113_105853.jpg? y 200 Command PORT okay. 150 File status okay; about to open data connection. 226 Transfer complete. 361166 bytes received in 0.08 secs (4.0927 MB/s) mget Aqua Ring_20180113_110130.jpg? y Solution: user1@system ~ $ wget -r ftp://username:[email protected]:2221/storage/ABC5-1DF1/DCIM/Camera/
In interactive ftp mode, run prompt before mget *. This toggles interactive prompting off, so y is assumed for every question. This feature has been around since the "invention" of ftp.
How to download multiple files at a time using the mget command from an FTP server without pressing y every time?
What's the difference between anonymous and guest logins in vsftpd? Both can be enabled/disabled: anonymous_enable= guest_enable= Both are mapped to a different username: ftp_username= guest_username= Pretty much everything what I know is true for anonymous can be applied to guest. Then why guest exists if anonymous seems good enough? EDIT Please consider the following vsftpd config. On the left hand side I have anonymous config, on the right guest. Apart those 3 lines the rest of the config is coherent. # Standalone mode # Standalone mode listen=YES listen=YES # Access rights # Access rights anon_root=/var/ftp anon_root=/var/ftp download_enable=YES download_enable=YES anonymous_enable=YES | guest_enable=YES local_enable=NO local_enable=NO ftp_username=ftp | guest_username=ftp2 # Upload Access rights # Upload Access rights write_enable=YES write_enable=YES anon_mkdir_write_enable=YES anon_mkdir_write_enable=YES anon_other_write_enable=NO anon_other_write_enable=NO anon_upload_enable=YES anon_upload_enable=YES delete_failed_uploads=YES delete_failed_uploads=YES # Security # Security anon_world_readable_only=YES anon_world_readable_only=YES connect_from_port_20=YES connect_from_port_20=YES hide_ids=YES hide_ids=YES ls_recurse_enable=NO ls_recurse_enable=NO tilde_user_enable=NO tilde_user_enable=NO pasv_min_port=50000 pasv_min_port=50000 pasv_max_port=60000 pasv_max_port=60000 # Features # Features ftpd_banner=Welcome Anonymou | ftpd_banner=Welcome Guest !! no_anon_password=YES no_anon_password=YES xferlog_enable=YES xferlog_enable=YES User experience with anonymous enabled: $ lftp -d 127.0.0.1 ---- Resolving host address... ---- 1 address found: 127.0.0.1 lftp 127.0.0.1:~> ls ---- Connecting to 127.0.0.1 (127.0.0.1) port 21 <--- 220 Welcome Anonymous !! ---> FEAT <--- 211-Features: <--- EPRT <--- EPSV <--- MDTM <--- PASV <--- REST STREAM <--- SIZE <--- TVFS <--- 211 End ---> USER anonymous <--- 230 Login successful. 
---> PWD <--- 257 "/" is the current directory ---> EPSV <--- 229 Entering Extended Passive Mode (|||52743|) ---- Connecting data socket to (127.0.0.1) port 52743 ---- Data connection established ---> LIST <--- 150 Here comes the directory listing. ---- Got EOF on data connection ---- Closing data socket drwxrwxr-x 2 ftp ftp 4096 Mar 16 13:21 upload drwxr-xr-x 2 ftp ftp 4096 Mar 16 13:30 vagrant <--- 226 Directory send OK. lftp 127.0.0.1:/> exit ---> QUIT <--- 221 Goodbye. ---- Closing control socket User experience with guest enabled: $ lftp -d 127.0.0.1 ---- Resolving host address... ---- 1 address found: 127.0.0.1 lftp 127.0.0.1:~> ls ---- Connecting to 127.0.0.1 (127.0.0.1) port 21 <--- 220 Welcome Guest !! ---> FEAT <--- 211-Features: <--- EPRT <--- EPSV <--- MDTM <--- PASV <--- REST STREAM <--- SIZE <--- TVFS <--- 211 End ---> USER anonymous <--- 230 Login successful. ---> PWD <--- 257 "/" is the current directory ---> EPSV <--- 229 Entering Extended Passive Mode (|||51032|) ---- Connecting data socket to (127.0.0.1) port 51032 ---- Data connection established ---> LIST <--- 150 Here comes the directory listing. ---- Got EOF on data connection ---- Closing data socket drwxrwxr-x 2 ftp ftp 4096 Mar 16 13:21 upload drwxr-xr-x 2 ftp ftp 4096 Mar 16 13:30 vagrant <--- 226 Directory send OK. lftp 127.0.0.1:/> exit ---> QUIT <--- 221 Goodbye. ---- Closing control socket As far as I can tell my user experience is not different regardless the configuration.
This quote from the documentation describes it: guest_enable If enabled, all non-anonymous logins are classed as "guest" logins. A guest login is remapped to the user specified in the guest_username setting. Anonymous access is intended mainly for providing everybody with access to public files. Guests, on the other hand, need a login/password, so the guest feature limits access to a specific group of people (e.g. company employees, or registered customers).
anonymous vs. guest logins in vsftpd?
I need to transfer files from my FTP server to another one. Is there any tool that takes two sets of access credentials and transfers from one server to the other, without downloading the files to my machine?
You can use a client that supports the FXP protocol, as described in one of the answers from this webmaster.stackexchange.com Q&A: How can I transfer files from one server to another server using FTP The following is from the SmartFTP Knowledge base: excerpt What Is FXP? FXP stands for File eXchange Protocol. It lets you copy files from one FTP-server to another using an FXP-client. Normally you transfer files using the FTP protocol between your machine and a FTP-server, and the maximum transfer speed depends on the speed of your Internet connection (e.g. 56k, cable or T1). When transferring files between two remote servers using an FXP client, the maximum transfer speed does not depend on your connection but only on the connection between the two servers, which is usually much faster than your own connection. Because it is a direct connection you will not be able to see the progress or the transfer speed of the files. 2 such clients that support this are SmartFTP and CuteFTP. excerpt Restrictions Both FTP servers must support FXP and have it enabled. Please consult with the server administrator since most FTP servers do not support FXP, or have FXP disabled due to potential security risks. One server has to support PASV mode and the other server must allow PORT commands from a foreign address. The client logs in to both servers and then it arranges for a file transfer by telling one server that it will be a passive transfer and the other that it will be an active transfer, see example. excerpt Example The FTP client tells the destination FTP server to listen for a connection by sending a "PASV" command. The source FTP server connects to the data port reported by the destination server (after a successful PASV command). The client then passes the address/port in a "PORT" command to the destination server. Thus all the data goes directly from the source to the destination FTP server. Both servers only report status messages on fail/success to the FTP client. 
You can transfer files from one remote server to another using SmartFTP by opening a remote server in each window and then dragging and dropping. References Knowledge Base - Home > What is ... > What Is FXP?
how to transfer files between two ftps
I've noticed that * isn't interpreted in ftp or lftp. Suppose I want to change directory from the current one to, say, ./japan. At the ftp> or lftp> prompt, if I type: $ cd jap* it shows this error: No such file or directory So I'm forced to give the complete name: $ cd japan EDIT #1 @thomas, @gold: Thank you for your valuable information. Since * isn't interpreted with all commands (like cd), is there any way I can avoid typing the complete file name every time?
If you're using lftp, you can use the Tab key to do path completion, similar to the method used in a shell such as Bash or Zsh. If you continue to hit Tab as you type, it will complete as much as matches; you can then type additional characters to narrow down what's left, based on what you've typed thus far. Example Initially, after connecting to an SFTP server: lftp me@sftpserver:~> pwd sftp://me@sftpserver/home/me If I type cd u and then hit Tab one time, it will complete this: lftp me@sftpserver:~> cd upload/ If I hit it a 2nd time: lftp me@sftpserver:~> cd upload/ 2011-07-12/ a/ If I type a 2 and hit Tab another time, it will complete like this: lftp me@sftpserver:~> cd upload/2011-07-12/ At which point, if you hit Enter, it will run the above cd command.
* not interpreted in ftp, lftp?
I have this folder full of files. Once per day, I want the newest of the files to be FTPed automatically to a file server.
Make a short script; get the filename via this line: newestfilename=`ls -t $dir | head -1` (assuming $dir is the directory you're interested in), then feed $newestfilename to your FTP command, and of course, cron the script to run once a day. If you have ncftp, you can use the following command to ftp the file: ncftpput -u ftpuser -p ftppasswd ftphost /remote/path $dir/$newestfilename Without ncftp, this may work: ftp -u ftp://username:[email protected]/path/to/remote_file $dir/$newestfilename
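Parsing ls -t works for simple names; a slightly more robust sketch of the same newest-by-mtime selection uses find instead, so filenames with spaces survive (the directory and file names below are illustrative, not from the original question):

```shell
# Pick the most recently modified regular file in a directory:
# let find print "mtime<TAB>path", sort numerically, keep the newest.
dir=$(mktemp -d)
touch -d '2 days ago' "$dir/older.txt"
touch "$dir/newest.txt"

newestfilename=$(find "$dir" -maxdepth 1 -type f -printf '%T@\t%p\n' \
                  | sort -rn | head -n 1 | cut -f2-)
echo "$newestfilename"

rm -rf "$dir"
```

$newestfilename can then be fed to ncftpput exactly as in the answer above; -printf is a GNU find feature.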
What is a reliable way to automate FTP upload of 'the newest file in directory X'?
I want to backup a FTP directory (not backup to) with a cron script running daily. I'd prefer a solution that could sync the FTP to my local computer; only copy files changed and remove files that have been deleted. Is there such an application?
You can use curlftpfs and rsync to accomplish what you want. curlftpfs is a FUSE filesystem that will let you mount a remote ftp location as a normal filesystem. Once it's mounted, you can use rsync to sync the mount with a local copy.
Incrementally backup FTP to local computer
I'm trying to get a ProFTPD to LDAP auth on a Active Directory base. I still couldn't figure out what could be wrong with my configuration since, executing a LDAP query with ldapsearch seems fine proftpd.conf /etc/proftpd.conf # This is the ProFTPD configuration file ServerName "FTP and Ldap" ServerType standalone ServerAdmin [email protected] AuthOrder mod_ldap.c LoadModule mod_ldap.c DefaultServer on ShowSymlinks on RootLogin off UseIPv6 off AllowLogSymlinks on IdentLookups off UseReverseDNS off Umask 077 User ftp Group ftp DefaultRoot /home/ftp/%u/ DefaultChDir ftp RequireValidShell off UseFtpUsers off SystemLog /var/log/proftpd/proftpd.log TransferLog /var/log/proftpd/xferlog DefaultTransferMode binary <IfModule mod_ldap.c> LDAPServer domaincontroller.domain.net LDAPAttr uid sAMAccountName LDAPDNInfo cn=linux.ldap,ou=users,ou=resources,dc=domain,dc=net password LDAPAuthBinds on LDAPDoAuth on "dc=domain,dc=net" (&(sAMAccountName=%v)(objectclass=User)) LDAPQueryTimeout 15 LDAPGenerateHomedir on LDAPGenerateHomedirPrefix /home/ftp #uid e guid of the local global user LDAPDefaultUID 14 LDAPDefaultGID 50 LDAPForceDefaultUID on LDAPForceDefaultGID on </IfModule> <Directory /*> AllowOverwrite on </Directory> proftpd -nd10 -> "search failed" Running proftpd with a debug level of 10 I got these logs while authenticating with my user (nicolas): proftpd -nd10 dispatching CMD command 'PASS (hidden)' to mod_auth mod_ldap/2.8.22: generated filter dc=domain,dc=net from template dc=domain,dc=net and value nicolas mod_ldap/2.8.22: generated filter (&(sAMAccountName=nicolas)(objectclass=User)) from template (&(sAMAccountName=%v)(objectclass=User)) and value nicolas mod_ldap/2.8.22: attempting connection to ldap://domaincontroller.domain.net/ mod_ldap/2.8.22: set protocol version to 3 mod_ldap/2.8.22: connected to ldap://domaincontroller.domain.net/ mod_ldap/2.8.22: successfully bound as cn=linux.ldap,ou=users,ou=resources,dc=domain,dc=net password mod_ldap/2.8.22: set 
dereferencing to 0 mod_ldap/2.8.22: set query timeout to 15s mod_ldap/2.8.22: pr_ldap_search(): LDAP search failed: Operations error ldapsearch works But ldapsearch on the other hand works just fine: [root@ftp2 ~]# ldapsearch -x -W -D "cn=linux.ldap,ou=users,ou=resources,dc=domain,dc=net" -h domaincontroller.domain.net -b "dc=domain,dc=net" -LLL "(SAMAccountName=nicolas)" Enter LDAP Password: dn: CN=Nicolas XXXXXXX,OU=XXXXXXX,OU=XXXXXXX,OU=XXXXXXX,DC=XXXXXXX,DC=XXXXXXX objectClass: top objectClass: person objectClass: organizationalPerson objectClass: user cn: Nicolas XXXXXXX sn: XXXXXXX description:XXXXXXX givenName: XXXXXXX distinguishedName: Any clues?
To achieve this, the URL must be RFC 2255 compliant, and with ProFTPD the queries will only work when they are filtered by an OU. They will not work at the LDAP root level. LDAPServer ldap://domaincontroller.domain.net:389/??sub Organizational Unit: LDAPDoAuth on "OU=OFFICE,dc=domain,dc=net" (&(sAMAccountName=%v)(objectclass=User)) Umask inside the dir. The limits are just for safety: <Directory /> Umask 022 022 AllowOverwrite on <Limit MKD XMKD CDUP XCUP CWD XCWD RMD XRMD> DenyAll </Limit> </Directory>
Configuring proftpd and mod_ldap.c query not working - any ideas?
I'm on CentOS v6.4 and using its native FTP server, which I suppose is SFTP. (Am I right?) I can use FTP fine, but I need to log the actions taken by users - who logged in, who modified which files, who deleted which files, etc.; basically the important actions. So my simple questions are: Where and how can I access/check the FTP logs on the server? Can it even be done with the default SFTP? (Do I need vsftpd?) In short, what is the best and simplest way to get FTP logging?
You can log sftp. Try this: in the /etc/ssh/sshd_config file, change this line: Subsystem sftp /usr/libexec/openssh/sftp-server to: Subsystem sftp /usr/libexec/openssh/sftp-server -l INFO -f AUTH Then configure syslog to send the AUTH facility to your log file. On CentOS 6, edit /etc/rsyslog.conf and add this line: auth.* /var/log/sftp.log After making these changes, reload (kill -HUP) or restart sshd, and restart rsyslog, for them to take effect.
(CentOS) default FTP (SFTP) Log File?
First of all: Why I'm trying this? Because we need to download some files AND rename to shorten ones with date "stamps". The remote files have really huge filenames and it's not an option to change(isn't our ftp). I'm trying to make a bulk download and rename of some files in a remote ftp server, without having to open one ftp connection to each file I have to download. So far, i could achieve renaming and download on-the-fly with nmap ftp command, renaming every file that starts with "N" and ends with ".TXT" to "N_date_time_stamp.TXT" ftp -niv $url << FTP_COMMAND user $user $password cd $remotedir nmap N*.TXT N_`date "+%H%M%N"`.TXT mget N* bye FTP_COMMAND The problem is: nmap keeps the same %N value to all files passed to mget, and it should change on every download to the current nanosecond value: 250 CWD command successful. local: N_1054232349627.TXT remote: NO2346662345257245624572457245724562411125555341346134771345123461146-44.TXT 227 Entering Passive Mode (xxxxxxxxxxxxxxx). 125 Data connection already open; Transfer starting. 226 Transfer complete. 2220 bytes received in 0,0995 secs (22 Kbytes/sec) local: N_1054232349627.TXT remote: NO2346662345257245624572457245724562411125555341346134771345123461146-45.TXT 227 Entering Passive Mode (xxxxxxxxxxxxxx). 125 Data connection already open; Transfer starting. 226 Transfer complete. 2220 bytes received in 0,107 secs (20 Kbytes/sec) Is there a way to update the nmap on each download?
Well, I did a sort of mixed implementation, based on the answers from Stephane and slm. I couldn't use zsh because this is a production server and installing a new shell is not an option, so I used lftp, which was already installed. Explanation: the first here-doc (FTPLISTGET) connects to the FTP server and lists the files (nlist). If the listing was successful ( if [ $? -eq 0 ] ), download the files one by one, renaming each with the current date (in the format year, month, day, hour, minute, nanosecond). Some FTP servers are blazing fast, and stamping only down to the second could overwrite files. exec_ftp(){ # LIST LIST_FTP=$(lftp $protocol://$url << FTPLISTGET user $user $pass nlist bye FTPLISTGET) # Check if list is not empty, proceed... if [ $? -eq 0 ]; then echo "$LIST_FTP" | while read file do DEST="N_$(date +%Y%m%d%H%M%N).TXT" lftp $protocol://$url <<-DOWNLOAD user $user $pass cd $remotedir get $file -o /home/user/$DEST rm $file bye DOWNLOAD echo "Done in $(date +%d/%m/%Y-%T)" >> /var/log/transfer_ftp.log done # If listing is not possible, else echo "FTP: $url user: $user - Can't reach host, or wrong credentials" >> /var/log/transfer_ftp_error.log fi } Edit 1: Changed backticks to $(...) as suggested by slm, and added the variable $protocol. Why? Because lftp can download from and automate sftp and ftps as well, and this will be pretty useful to us :)
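The nanosecond stamp is what keeps fast transfers from colliding; the naming step can be sketched in isolation (GNU date's %N is assumed available; on non-GNU date it prints a literal N):

```shell
# The naming step alone: a nanosecond stamp means two back-to-back
# downloads get different destination names.
make_dest() {
    echo "N_$(date +%Y%m%d%H%M%N).TXT"
}

a=$(make_dest)
b=$(make_dest)
printf '%s\n%s\n' "$a" "$b"
```

Two consecutive calls are separated by far more than a nanosecond of process startup, so the names differ in practice.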
FTP bulk download and rename
I just installed vsftpd according to these directions. I am trying to get FTP working on my Ubuntu box on Amazon AWS. When I first tried these directions, it did not work: I was trying to connect via FileZilla and WinSCP from my Windows machine to my Ubuntu server. When it failed, I tried adding these options to my /etc/vsftpd.conf file. Specifically: pasv_enable=YES pasv_min_port=64000 pasv_max_port=64321 port_enable=YES pasv_address=<your-publicly-resolvable-host-name> pasv_addr_resolve=YES <or> NO This did not help. Finally, what did work was switching WinSCP into "Active Mode". My question is: what do those different parameters mean? I am assuming pasv_enable enables passive mode and the port settings guide the ports used for it, but I am not sure what port_enable, pasv_address and pasv_addr_resolve do. Also, now that I am using active mode, do I need any of those entries? Thank you
There is (obviously) a manual page for vsftpd.conf, which is always a good place to start. TL;DR version: they should be needed only for the passive mode of FTP. pasv_enable Set to NO if you want to disallow the PASV method of obtaining a data connection. Default: YES pasv_address Use this option to override the IP address that vsftpd will advertise in response to the PASV command. Provide a numeric IP address, unless pasv_addr_resolve is enabled, in which case you can provide a hostname which will be DNS resolved for you at startup. Default: (none - the address is taken from the incoming connected socket) pasv_addr_resolve Set to YES if you want to use a hostname (as opposed to IP address) in the pasv_address option. Default: NO
What does pasv_enable and related fields mean in vsftpd.conf
I'm trying to download a file from ftp server using curl: curl --user kshitiz:pAssword ftp://@11.111.11.11/myfile.txt -o /tmp/myfile.txt -v curl connects to the server and freezes: * Hostname was NOT found in DNS cache * Trying 11.111.11.11... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 11.111.11.11 (11.111.11.11) port 21 (#0) < 220-You Are Attempting To Access a Private < 220-Network. Unauthorized Access is Strictly < 220-Forbidden. Violators Will be Prosecuted! < 220-- Management < 220 This is a private system - No anonymous login > USER kshitiz < 331 User kshitiz OK. Password required > PASS pAssword < 230-OK. Current directory is / < 230 4432718 Kbytes used (54%) - authorized: 8192000 Kb > PWD < 257 "/" is your current location * Entry path is '/' 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0> EPSV * Connect data stream passively * ftp_perform ends with SECONDARY: 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0< 229 Extended Passive mode OK (|||10653|) * Hostname was NOT found in DNS cache * Trying 11.111.11.11... * Connecting to 11.111.11.11 (11.111.11.11) port 10653 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0* Connected to 11.111.11.11 (11.111.11.11) port 21 (#0) > TYPE A 0 0 0 0 0 0 0 0 --:--:-- 0:04:02 --:--:-- 0^C Connecting with ftp and fetching a file works however: Status: Connecting to 11.1.1.11:21... Status: Connection established, waiting for welcome message... Response: 220-You Are Attempting To Access a Private Response: 220-Network. Unauthorized Access is Strictly Response: 220-Forbidden. Violators Will be Prosecuted! Response: 220-- Management Response: 220 This is a private system - No anonymous login Command: USER kshitiz Response: 331 User kshitiz OK. Password required Command: PASS ****** Response: 230-OK. 
Current directory is / Response: 230 4432718 Kbytes used (54%) - authorized: 8192000 Kb Status: Server does not support non-ASCII characters. Status: Connected Status: Starting download of /myfile.txt Command: CWD / Response: 250 OK. Current directory is / Command: PWD Response: 257 "/" is your current location Command: TYPE I Response: 200 TYPE is now 8-bit binary Command: PASV Response: 227 Entering Passive Mode (10,9,4,66,39,139) Command: RETR myfile.txt Response: 150 Accepted data connection Response: 226-File successfully transferred Response: 226 0.000 seconds (measured here), 3.39 Kbytes per second Status: File transfer successful, transferred 1 B in 1 second What's the deal with the TYPE A command? Why doesn't curl work when ftp does?
Adding the --disable-epsv switch fixed the problem. A little explanation: I just went through many hours of trying to figure out weird FTP problems. The way the problem presented was that after login, when the FTP client attempted a directory listing (or any other command), it would just hang. EPSV is "extended passive mode", a newer extension to FTP's historical passive mode (PASV) ... most recent FTP clients attempt EPSV first, and only fall back to the traditional PASV if it fails. ... if the firewall is blocking EPSV, the client will think that the command was successful [and keep waiting for a response]. Read more here.
Curl freezes when downloading from ftp
I have a folder where any user's PHP scripts need to be able to create subfolders/files and unlink files. I do sudo chown -R apache:apache /var/www/public_html/a But after that, my FTP user cannot upload files into that folder. And if I do sudo chown -R yulichika:users /var/www/public_html/a then the FTP user can access the folder, but the PHP scripts end up with the wrong permissions. I do not want to set the whole folder to 0777, so how can I give two users the permissions to operate on the same folder? Thanks.
You can use access control list (ACL) commands. First, set apache as the owner of the directory: sudo chown -R apache:apache /var/www/public_html/a Now set an ACL so that the FTP user can upload files. FOR USER: sudo setfacl -R -m u:yulichika:rwx /var/www/public_html/a FOR GROUP: sudo setfacl -R -m g:users:rwx /var/www/public_html/a Hope this solves your problem.
centos folder permission ftp user and apache
I have an interesting problem where I have the ability to upload files, change permissions, and download files using FTP on CentOS. However, the interesting and annoying part is that the files are completely blank (0 bytes) when they get uploaded. What might be the trouble here? Here is the log from the client (FileZilla): Status: Starting upload of C:\gettweetmodel_dev.php Status: Retrieving directory listing... Command: TYPE I Response: 200 Switching to Binary mode. Command: PASV Response: 227 Entering Passive Mode. Command: LIST -a Response: 150 Here comes the directory listing. Response: 226 Directory send OK. Command: TYPE A Response: 200 Switching to ASCII mode. Command: PASV Response: 227 Entering Passive Mode Command: STOR gettweetmodel_dev.php Response: 150 Ok to send data. Response: 451 Failure writing to local file.
There are three main possibilities attached to that error code: you either don't have permission to upload to that directory, the disk on the server is full, or uploading the file would exceed your user's disk quota. FTP 4xx error codes are "Transient Negative Completion" replies - in other words, these error codes are returned when the server fails to do something. Specifically, error code 451 indicates that the server could not write to a file. If it is true that you are able to create files of zero size in the remote directory, then a permission error is most likely ruled out. If you can contact the server's administrator, you should be able to determine the exact problem.
File uploads successfully however it is 0 bytes
My /var/log/messages is full of messages like: Jan 29 01:00:02 vm2147 pure-ftpd: (?@::1) [INFO] New connection from ::1 Jan 29 01:00:02 vm2147 pure-ftpd: (?@::1) [INFO] Logout. Jan 29 01:05:02 vm2147 pure-ftpd: (?@::1) [INFO] New connection from ::1 Jan 29 01:05:02 vm2147 pure-ftpd: (?@::1) [INFO] Logout. These messages are generated every five minutes. What do they mean and is it a problem? How can I prevent them? My system is 2.6.35.12-90.fc14.x86_64.
Could you take a look at your crontab? Maybe something is defined there as a task that runs every 5 minutes.
pure-ftpd floods /var/log/messages
We have 2 Linux RH servers that were configured the same way: same OS version, same FTP client, etc. The FTP client we installed is located at this website: http://rpm.pbone.net/index.php3/stat/4/idpl/20810117/dir/scientific_linux_6/com/ftp-0.17-53.el6.x86_64.rpm.html The permissions were already set up identically at the firewall level for both servers. Both are on the same VLAN, 10.240.194.x/23. We have servers A and B. Server A connects to the FTP server without issues; we just have to set it up with an active connection. Server B connects as well, and we set it up in active mode, but when we try to list files/directories, find the current directory, or upload/download files, we can't. So far the only thing we are able to do is change to another directory. Every time we try to do even a simple ls or pwd, we get this message: 200 PORT command successful. 150 Opening ASCII mode data connection. #It gets stuck here for a while. 500 Command not understood. As far as I understand, the FTP client installed on both servers doesn't have anything to be changed or configured. Does anyone have an idea of what could be checked/changed to make the other server work? Sadly the FTP server is not owned by our company. I tried to do some searching, but haven't been lucky. Any help is appreciated.
You have to do FTP in passive mode and not active mode. If using a text client, you have to use the command: PASV If using another piece of software, you will have to find the menu for PASSIVE mode transmission. see Active FTP vs. Passive FTP, a Definitive Explanation
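If a scriptable client such as lftp is available, passive mode can also be pinned in its rc file (a config sketch; the path assumes lftp's defaults):

```
# ~/.lftprc
set ftp:passive-mode on
```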
FTP on Linux RH -- stuck at 150 Ascii
1,452,599,967,000
If I connect to my FTP server using the ftp command, I get a prompt waiting for me to enter one of the commands listed by help. I am not so much of a newbie that I don't realize those commands are wrappers around lower-level protocol instructions such as 'MKD', 'SMNT', 'NLST' (and more). I would like to know whether it's possible to send native FTP commands using the stock ftp tool from bash, and if not, whether there are more powerful tools to achieve that.
Use quote (which sends an arbitrary FTP command) to achieve that: https://linux.die.net/man/1/ftp
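An illustrative session (server responses omitted; the directory name is made up):

```
ftp> quote HELP
ftp> quote MKD incoming
ftp> quote NLST
```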
how to send native ftp words to one ftp server?
1,373,228,270,000
Can I use ProFTPd without using a chroot jail (thereby preventing access to anything outside of the FTP root)? I have a requirement to have symlinks in my FTP source that point to locations outside of the directory where I root my FTP service. All of the docs and discussion I've read on ProFTPd talk about how to use the chroot functionality (even within StackExchange), but I'm wondering if I can bypass using that and use a different method to serve my FTP root. Since the symlinks must remain as symlinks, mounting the directories as a way of bypassing the chroot restriction (the clever "solution" to the problem) does not work.
ProFTPD has the mod_vroot module for this purpose. You can compile this module into ProFTPD yourself or install it if your repositories have it (apt-get install proftpd-mod-vroot for certain Debian repositories). mod_vroot allows a user to configure a "virtual chroot", setting the DefaultRoot directive (the initial/root directory for a session; ProFTPD would chroot to this directory without mod_vroot), but allowing symbolic links to point outside of the DefaultRoot path. mod_vroot also supports the VRootServerRoot directive, to which ProFTPD will perform a real chroot, meaning that symlinks can point outside of the DefaultRoot, but must target locations within the VRootServerRoot path. Example config: <IfModule mod_vroot.c> VRootEngine on VRootServerRoot /usr/share/ # Symlinks can only point to location within /usr/share/ VRootOptions allowSymlinks DefaultRoot /usr/share/ftproot/ </IfModule>
Implement ProFTPd server without chroot
1,373,228,270,000
Here is the .netrc file and the command I use; any idea what is wrong here?

[root@localhost ~]# cat /root/.netrc
machine ftp.nyxdata.com
login anonymous
password empty
macdef download_nyse_index
cd /OpenBook/SymbolMapping
bin
get SymbolMap.xml /tmp/SymbolMap.xml
quit

[root@localhost ~]# ftp
ftp> $ download_nyse_index
'download_nyse_index' macro not found.
ftp> bye
[root@localhost ~]# uname -a
Linux localhost 3.17.8-200.fc20.i686+PAE #1 SMP Thu Jan 8 23:45:44 UTC 2015 i686 i686 i386 GNU/Linux
[root@localhost ~]# which ftp
/bin/ftp
[root@localhost ~]# rpm -qf /bin/ftp
ftp-0.17-65.fc20.i686
A macdef directive, like the others (login, password, etc.), only applies to the machine-headed stanza that it's in. As far as I know, putting a macdef directive before the first machine stanza has no effect, and there's no way to have a macro available before the open command is executed. Your macro will work if you open a connection to the site first: $ ftp ftp.nyxdata.com ftp> $download_nyse_index To script a full FTP session, pass input to the ftp command. ftp ftp.nyxdata.com <<EOF cd /OpenBook/SymbolMapping bin get SymbolMap.xml /tmp/SymbolMap.xml quit EOF If you're doing something this simple, use wget or curl. If you don't want the password to be on the command line, you can put in in ~/.netrc; wget reads it by default, and curl reads it if you pass the -n option. wget -O /tmp/SymbolMap.xml ftp://ftp.nyxdata.com/OpenBook/SymbolMapping/SymbolMap.xml
I have defined a macro in .netrc file but when I execute it from ftp it says "macro not found"
1,373,228,270,000
Currently running CentOS 6.5 with vsftpd. I would like to explain my process and then have the proper process explained back to me from scratch, which I believe will solve my issue. I am currently able to log into my server via FTP from my "root" user account, but I understand that is bad practice. So what I need to do is create another local user/virtual user (I really do not know which) that can access the "/var/www" directory via FTP. (I simply need to get to the point where I can begin uploading web files, as I'm a web programmer, not a system administrator -- but I was so pleasantly surprised to have a dedicated server to work with.) Initially I created a local user, but was only able to FTP into the "home" user directory. So I next tried unjailing that user via CHROOT (vsftpd.conf). That sort of worked; the parent directories were visible, but upon navigating up to them via FTP everything disappeared (possibly an issue with permissions, I don't know). Next I tried rejailing the local user and then modifying its "home" directory from "/home/" to "/var/www". After attempting that, I FTP'd in and then could not see anything, so another failure. I've since returned the user's "home" directory back to "/home/" and crawl over to SOF confused as hell. vsftpd.conf # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user.
#anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. 
vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). chroot_local_user=YES chroot_list_enable=NO # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd with two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pasv_enable=YES pasv_min_port=50000 pasv_max_port=51000 port_enable=YES pasv_address=xxx.xxx.xxx.xxx pasv_addr_resolve=NO pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES Any help is greatly appreciated.
First of all, I'd create a symlink between /var/www/ and your home directory; then when you land in /home/usr you can go to /home/usr/www and it will take you to /var/www:

cd /home/usr
sudo ln -s /var/www www

Perform an ls -lrt on /var/www; this tells you who owns that directory:

ls -lrt /var/www/

Now make sure your user is part of the group that owns www. If the owner is root:root, that's bad practice; depending on your distro, it could be www-data or apache etc.:

cat /etc/group | grep -e apache -e http -e ftp -e www
apache:x:48:

If usr is at the end of the result, your user is part of that group. If no group owns www and it's root:root, create one:

groupadd www-data

Assuming that group is www-data:

sudo adduser usr www-data

Now make your user the owner of www:

sudo chown usr:www-data -R /var/www

And set the right permissions on www:

sudo chmod 0755 -R /var/www
sudo chmod g+s -R /var/www
CentOS Local User not able to view directories/files via FTP login
1,373,228,270,000
My question is twofold. I need to somehow re-enable telnet and/or ssh on a device where I currently have only root FTP access. Also, if I am going about this wrong, please let me know. Details follow... Background: I have a special development board running ARM Linux with the 2.6 kernel. (The device is the 9G45, but that is not really relevant to my question.) While happily monkeying around with settings and such, I somehow caused sshd to stop working. Upon restart, ssh connections were refused. Troubleshooting: I have found that lighttpd is working and serving up pages. This confirms that the network and some of the initialization work. I have found that vsftpd is functioning and I do have root access there. I can connect via telnet, but the documented default user named "guest" is not working; neither is root. Using FTP, I determined that the sshd pid file is not present, so I assume it is not running. Unfortunately, connecting via serial did not seem to work, but I have a support ticket out with the manufacturer, so maybe they will answer me. Possible solutions (needs refining): I figure that I may be able to get in through the root FTP access, monkey with an init script or something else, and then possibly gain telnet access (or maybe set sshd to run later in the boot process). I am not really sure where to go from here. If someone could build on my possible solution or propose a new one, it would be greatly appreciated.
Fixing files When you're limited to FTP access only on a system, your only real option is to get/put files onto the device. But this often turns out to be all you need to "break" into a system. The approach you're going to want to take is to pull a file down to your local system, edit it, and then put it back. /etc/passwd & /etc/shadow The files you'll want to start with are the /etc/passwd and /etc/shadow files. If you grab these you can then make sure that the usernames and passwords are all set correctly in there. A big hint: since you know root's password, you can copy the password string for this user from /etc/shadow to any other corresponding user, so that you now know that user's password as well (it'll be the same one that root uses). ssh access Going beyond the accounts, the same approach can be used to get sshd working as well. The config file for sshd is typically /etc/ssh/sshd_config. You'll want to make sure that this file is in a usable state so that sshd, the daemon, can come up and allow you to log in. NOTE: It's often the case that the root user is not allowed to ssh into systems, so you might need to temporarily allow this, just to gain ssh access to the system. You can turn it off afterwards. sshd service You'll also need to make sure that the sshd service starts up when the system boots. This one is a little trickier. Given you're using Linux 2.6, I'm going to assume that your system uses /etc/init.d for the various services that are supposed to run on your system. You can create links to the files in /etc/init.d under /etc/rc3.d so that the sshd service starts up when the system reboots.
Something like this:

$ ls -l /etc/rc3.d/ | grep ssh
lrwxrwxrwx 1 root root 14 Aug 10 2011 S55sshd -> ../init.d/sshd

starting the service Getting this service to start is a little trickier: you'll either have to reboot (risky, since you might lose FTP access) or, if the system has cron or at services running, you could add a task to either of these that starts the service at a specific point in time, say one minute from now. Everything else Once you have sshd access restored you can fix telnet and anything else from that shell, so I won't go any further; those topics are beyond the scope of restoring access with ssh.
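As an illustration of the cron approach, a one-shot entry dropped onto the device via FTP could look like this (a sketch; the time, path and init-script name are assumptions, and busybox-style embedded crons may want the entry in a different file -- delete it once access is restored):

```
# /etc/cron.d/restore-ssh -- temporary, remove after regaining access
42 11 * * * root /etc/init.d/sshd start
```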
Reconfigure login through root ftp (lost ssh access)
1,373,228,270,000
# grep -i ftp inetd.conf ftp stream tcp nowait root /usr/sbin/ftpd # oslevel -s 6100-06-05-1115 So I have an AIX 6 server. How can I check if FXP is enabled on the FTPD or not?
According to IBM's AIX documentation for ftpd, there is a -ff flag which: Disables checking for both a privileged port and an IP address that matches the one used for the control connection when the client requests the server to connect back to a specific client port. Using this flag enables the client to request that the server send data to an alternate host or interface. By default, ftpd does not allow this action as a security precaution. This seems as though it would allow FXP, especially since according to Wikipedia: Although FXP is often considered a distinct protocol, it is in fact merely an extension of the FTP protocol and is specified in RFC 959 If that is the case, FXP is disabled by default since the option is not specified in inetd.conf. You can confirm by adding the flag and trying to initiate an FXP transfer. The transfer should only be allowed when the flag is in place.
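For reference, with the flag added, the inetd.conf line would look like this (a sketch; the repeated ftpd is the argv[0] convention seen in AIX inetd.conf files):

```
ftp stream tcp nowait root /usr/sbin/ftpd ftpd -ff
```

After editing, refresh inetd with refresh -s inetd so the change takes effect.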
How do I know that FXP is enabled or not? (AIX FTPD)
1,373,228,270,000
I am setting up SFTP access to one of my machines running Linux with the Dropbear SSH server. When I SFTP onto the machine remotely, I can see the entire filesystem on it, even where I might not have write access. How do I control what directories a user can see when connecting to my machine via SFTP? For example, what if I only want to make one directory, e.g. /ftp/, visible and accessible? Thanks.
I believe you'll need to run your dropbear ssh server inside a chroot'd jail if you want to restrict it to certain directories. If you were using a recent OpenSSH, I'd suggest using the ChrootDirectory setting in your sshd_config. It doesn't appear as though dropbear has a similar parameter, so you'll have to do it manually.
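For comparison, here is what the OpenSSH version would look like (an sshd_config sketch; the user name and path are placeholders, and dropbear has no equivalent directive):

```
# /etc/ssh/sshd_config
Match User ftpuser
    # /ftp must be owned by root and not group/world-writable
    ChrootDirectory /ftp
    ForceCommand internal-sftp
    AllowTcpForwarding no
```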
Set visible directories for SFTP access?
1,373,228,270,000
I am using a script that calls the SFTP service to fetch some logs from a remote server. This works fine in the normal scenario, but I have noticed a very long delay of around 30 minutes or more in some cases when the server expects the password. Following is the code I am using:

#!/bin/bash
dirdate=`/bin/date +%Y%m%d -d "1 day ago"`
INPUT_DIR="/root/SDP_BHC/input"
CREDENTIALS_FILE="/root/FTP_TEST/Credentials.csv"
# Loop to read credentials and other details from an external file.
while IFS=','; read node_id node_name ip1 ip2 ip3 user1 pass1 user2 pass2 user3 pass3 installed location circle sdpno hwtype
do
# Generate Input Directory for IP-Node
mkdir -p $INPUT_DIR/"$node_id"/"$node_name"/IP/"$dirdate"
echo -e "\n[INFO] Node IP: $ip1"
echo -e "=============================\n"
SOURCE_FILE="/var/opt/fds/statistics/*PSC-TrafficHandler_8*1_A_*_System*$dirdate*stat"
TARGET_DIR="$INPUT_DIR/$node_id/$node_name/IP/$dirdate"
set prompt "(%|#|\\$|%\]) $"
spawn /usr/bin/sftp $user1@$ip1
expect {
"$prompt"
}
#expect "Are you sure you want to continue connecting (yes/no)?"
#bin prompt
send "yes\r"
#expect "Password:"
expect {
"$prompt"
}
send "$pass1\r"
expect "sftp>"
send "mget $SOURCE_FILE $TARGET_DIR\r"
set timeout 2000
expect "sftp>"
send "bye\r"
EOD
.
.
.
done < $CREDENTIALS_FILE

The script hangs in some cases, such as when the password in the CREDENTIALS_FILE no longer matches (because of a password change on the source server). In that case, the password prompt gets two responses (yes and $pass1) and the script then hangs on the third password prompt, since the first two were wrong. Here it hangs for a very long time (in some cases it doesn't, which is another thing that confuses me: why doesn't it happen in those cases?). Anyway, whatever the case may be, I want the SFTP session to time out quickly in any such scenario. How do I make the session time out quickly if the passwords do not match at all? I am already using a timeout.
For a normal session, my script takes a maximum of 30-35 seconds to connect to the servers and transfer the files to the local machine. Could anyone please provide some pointers on how to fix this issue?
You can add to the sftp command the option -o NumberOfPasswordPrompts=1 to stop it asking twice for passwords. You might usefully also try -o ConnectTimeout=20 -o ConnectionAttempts=1
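Applied to the script in the question, the spawn line would then carry the options directly (a fragment of the OP's expect script; variables as defined there):

```
spawn /usr/bin/sftp -o NumberOfPasswordPrompts=1 -o ConnectTimeout=20 -o ConnectionAttempts=1 $user1@$ip1
```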
Huge delay in script response when using SFTP with expect
1,373,228,270,000
I have a .netrc with the following format in a directory, using a bash terminal on OS X: machine m... login l... password p... I have chowned and chmodded the file like this, as suggested in this help file: $ chown myusername .netrc $ chmod 600 .netrc When I am in the directory, I issue the following command: $ ftp m... Based on this tutorial (see the section "usage of the .netrc"), it seems like Unix should automatically detect that there is a .netrc file when I type ftp m..., see that the .netrc gives credentials for that machine, and pass them along to ftp. This is not happening. Instead, I see this output: Connected to server.domain.com 220 Microsoft FTP Service Name (file.server.domain.com:My_OSX_Username): #I would expect it would pull my username off the .netrc file Am I using the .netrc file incorrectly?
Yes, in general the .netrc file should just work as you've described. It might be getting blocked by your FTP client not supporting it (perhaps it was explicitly built without this feature enabled). This would likely be done since this method of storing usernames/passwords is inherently insecure and should likely not be used. Checking with strace You can always confirm what an application is doing by tracing it using the command line tool, strace. $ strace -s 2000 -o ftp.log ftp m After doing this you can analyze the log file, ftp.log to see if your ftp client is attempting to read the file .netrc. If it's working correctly you'll see this type of output in the log. setsockopt(3, SOL_SOCKET, SO_OOBINLINE, [1], 4) = 0 open("/home/saml/.netrc", O_RDONLY) = 4 uname({sys="Linux", node="greeneggs.bubba.net", ...}) = 0 fstat(4, {st_mode=S_IFREG|0600, st_size=47, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4898b85000 read(4, "machine ftp.somedom.org\n\tlogin sam\n\tpassword blah\n", 4096) = 47 fstat(4, {st_mode=S_IFREG|0600, st_size=47, ...}) = 0 read(4, "", 4096) = 0 close(4) = 0 NOTE: In the above I'm connecting to ftp.somedom.org as user "sam" using password "blah". You can see that it read from my .netrc file. ftp man page If you take a look at the ftp man page there is this switch that's mentioned in my version: -n Restrains ftp from attempting “auto-login” upon initial connection. If auto-login is enabled, ftp will check the .netrc (see netrc(5)) file in the user's home directory for an entry describing an account on the remote machine. If no entry exists, ftp will prompt for the remote machine login name (default is the user identity on the local machine), and, if necessary, prompt for a password and an account with which to login. open host [port] Establish a connection to the specified host FTP server. An optional port number may be supplied, in which case, ftp will attempt to contact an FTP server at that port. 
If the auto-login option is on (default), ftp will also attempt to automatically log the user in to the FTP server (see below). I'm on Fedora 19: $ rpm -qi ftp Name : ftp Version : 0.17 Release : 64.fc19 Architecture: x86_64
Do I just need to include a .netrc file to have UNIX pick it up?
1,373,228,270,000
I have recently installed DD-WRT on my D-Link DIR-615 router (rev. D3). Everything seems fine, but I cannot connect to some (most) FTP servers anymore. Any ideas what can cause this issue? Passive/active doesn't make a difference. I've discovered something interesting: I set up a port-range forward 1024-65535 | Both | 192.168.1.131 for testing purposes. After that I enabled or disabled the UPnP service (it doesn't really seem to matter which), and it let me connect to FTP, but just for a few seconds.
This is an old topic, but someone might still find my solution useful: update your firmware. DD-WRT provides updated versions, and the last one I could find here http://dd-wrt.com/site/support/other-downloads?path=others%2Feko%2FBrainSlayer-V24-preSP2%2F works. I can finally use FTP.
FTP issues after installing DD-WRT
1,373,228,270,000
I have some automated scripts which perform FTP uploads among other things. I'm wondering what level of error checking I should conduct once these uploads have finished executing. Could anything go wrong uploading a file when it reports "226 Transfer complete" which would warrant extra tests, besides checking for this string in the log, to check if a file was successfully uploaded?
No, it does not. 226 can also occur under conditions where local and remote files would not match (an ABOR, for one). See RFC 959.
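Because 226 alone is no guarantee, a common extra check is to compare checksums (or at least sizes) of the local file and the transferred copy after the session. A local sketch of the comparison step, with a plain cp standing in for the FTP round-trip:

```shell
tmp=$(mktemp -d)
printf 'payload' > "$tmp/local.bin"

# Stand-in for "upload, then re-download" of the file over FTP.
cp "$tmp/local.bin" "$tmp/remote.bin"

# Identical checksums mean the round-trip preserved the content.
if [ "$(cksum < "$tmp/local.bin")" = "$(cksum < "$tmp/remote.bin")" ]; then
    echo "files match"
fi
```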
Does "226 Transfer complete" guarantee the consistency between local and remote files when using ftp?
1,373,228,270,000
I can do tcpdumps with this command: tcpdump -w `date +%F-%Hh-%Mm-%Ss-%N`.pcap src 10.10.10.10 or dst 10.10.10.10 Q: I have an FTP server with username FTPUSER and password FTPPASSWORD. How can I upload the tcpdump in "real time"? I don't have enough storage to keep the dumps locally, so I need to upload them to a place that I can only reach via FTP. Can I somehow "pipe" the output of tcpdump to an FTP client that uploads it? [I need to preserve the filenames too: "date +%F-%Hh-%Mm-%Ss-%N.pcap".] So I'm searching for a solution that doesn't store any tcpdumps locally, but rather uploads the dumps in "real time". The OS is OpenWrt 10.03 on the router where tcpdump runs. [4 MB flash on the router; that's why I can't store them locally.] UPDATE 2: there is no SSH connection to the FTP server, just FTP [and FTPES, but that doesn't matter now, I think].
install curlftpfs opkg update; opkg install curlftpfs then create a script that will run after every boot of the router vi /etc/rc.d/S99tcpdump the content of S99tcpdump #!/bin/ash mkdir -p /dev/shm/something curlftpfs FTPUSERNAMEHERE:[email protected] /dev/shm/something/ tcpdump -i wlan0 -s 0 dst 192.168.1.200 or src 192.168.1.200 -w "/dev/shm/something/tcpdump-`date +%F-%Hh-%Mm-%Ss`.pcap" & make it executable chmod +x /etc/rc.d/S99tcpdump reboot router, enjoy. p.s.: looks like "-s 0" is needed because there could be messages like: "packet size limited when capturing, etc." - when loading the .pcap files in wireshark p.s.2: make sure the time is correct because if not, the output filename could be wrong..
How to upload tcpdumps in realtime to FTP?
1,373,228,270,000
OK, let's say I am responsible for an old application whose details I don't really know. I am trying to secure my server, and someone suggested blocking port 21, which is used for FTP. But I am not sure which programs are running and use FTP on a day-to-day basis. Say I am not in a position to install the tools I want on the server or on the network. What are my options? Should I block port 21 and see what transfers get stopped by the firewall? Is there a place on the AIX server where I can look to see the list of accessed and accessing servers on this port? If I have the IP of the server(s) and the file names, I will be able to track down the program doing it. So, is there a log file for FTP? Edit: a) inetd is active and ftp is in it (thanks @JeffSchaller) b) I am trying to see incoming and outgoing traffic on this port, with the commands that were performed (if possible). In other words, my goal is to know: What commands have been performed on the local FTP server? What commands have been performed by the local FTP client against other servers? Any suggestion welcome.
What commands have been performed on the local FTP server? To enable FTP logging on an AIX system, you need to reconfigure FTP (being called by inetd in your case) to send debug logs to syslog and to configure syslog to save those logs to a file. Edit /etc/inetd.conf and add -d to the end of the ftpd line: ftp stream tcp6 nowait root /usr/sbin/ftpd ftpd -d Refresh inetd: refresh -s inetd Edit /etc/syslog.conf and add a line for daemon.debug to save the logs somewhere: daemon.debug /var/log/ftp.log Create a file for syslog to write to: touch /var/log/ftp.log Refresh syslogd: refresh -s syslogd Syslog will send any daemon's logs to this file, so you'll want to filter it down with grep, perhaps: grep 'daemon:debug ftpd' /var/log/ftp.log. Commands that were sent via FTP will be logged with the string command:; here's a sample: May 18 10:13:35 ftpserver daemon:debug ftpd[3932700]: command: USER username-here^M May 18 10:13:35 ftpserver daemon:debug ftpd[3932700]: <--- 331 May 18 10:13:35 ftpserver daemon:debug ftpd[3932700]: Password required for username-here. May 18 10:13:42 ftpserver daemon:debug ftpd[3932700]: command: PASS May 18 10:13:42 ftpserver daemon:debug ftpd[3932700]: <--- 230- May 18 10:13:42 ftpserver daemon:debug ftpd[3932700]: Last login: Fri May 18 10:13:02 EDT 2018 on ftp from ftpclient.example.com May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: <--- 230 May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: User username-here logged in. May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: command: PORT 10,1,1,1,229,54^M May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: <--- 200 May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: PORT command successful. May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: command: LIST^M May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: <--- 150 May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: Opening data connection for /bin/ls. 
May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: <--- 226 May 18 10:13:43 ftpserver daemon:debug ftpd[3932700]: Transfer complete. Yes, those Control-M's appear as such in the logs! What commands have been performed by the local FTP client to other servers? Since applications could perform their own FTP actions, it'd be difficult to wrap every possible client program (such as /usr/bin/ftp) to catch this. The best bet is to configure the remote FTP server to log the commands, just as we did above. Second-best would be to configure the AIX firewall to allow-and-log traffic destined for port 21. Ensure you have the ipsec fileset installed: lslpp -L bos.net.ipsec.rte; echo $? It should show a fileset listed with a return code of 0, and not: lslpp: 0504-132 Fileset bos.net.ipsec.rte not installed. Ensure the ipsec devices are enabled: lsdev -l ipsec_v4 You should get one line back saying "Available", not "Defined" or no lines back at all. If there was no output or the device was "Defined": run smitty ipsec4 choose Start/Stop IP Security, choose Start IP Security, leave the defaults at Now and After Reboot and Deny All Non_Secure = no hit Enter. The ipsec device_v4 should now show as "Available". Create a logging file with: touch /var/log/ipsec.log. 
Update syslog:

echo "local4.debug /var/log/ipsec.log rotate size 100k files 4" >> /etc/syslog.conf
refresh -s syslogd

Add a rule to allow and log traffic destined for port 21:

# -v 4 == IPv4
# -n 2 == add this after the first rule
# -a P == permit
# -O eq == destination port *equals* 21
# -P 21 == destination port 21
# -w O == outbound connections; change this to "B" to log in both directions
# -c tcp == TCP protocol
# -s, -m, -d, -M = source/dest IP & mask (any)
# -l Y = Log it
# -r L = applies only to packets destined for or originating from the local host
genfilt -v 4 -n 2 -a P -O eq -P 21 -w O -c tcp -s 0.0.0.0 -m 0.0.0.0 -d 0.0.0.0 -M 0.0.0.0 -l Y -r L -D "allow and log port 21 traffic"

Start logging:

mkfilt -g start

Activate the ruleset:

mkfilt -u

Wait for outbound FTP connections to occur, then:

grep ipsec_logd /var/log/ipsec.log | grep DP:21

You'll see source and destination IPs for outbound FTP connections along with the timestamps, such as:

May 18 11:29:40 localhost local4:info ipsec_logd: #:0 R:p O:10.1.1.1 S:10.1.1.1 D:10.2.2.2 P:tcp SP:55091 DP:21 R:l I:en0 F:n T:0 L:0

It doesn't log the content (commands) of the FTP session, but you'll have timestamps and destinations. Note that every packet of each FTP connection is logged! References: ftpd daemon (IBM Knowledge Center), Setting up a firewall with AIX TCP/IP filtering
Is there a place where FTP transfers are logged?
1,373,228,270,000
I've been searching and I cannot find an answer on any of these sites that will work. I want to copy all of the contents of this folder to my server. Which Linux command should I use to do that? I don't want to manually run through all of the objects in that folder one at a time. I've seen everything from scp to rsync, but can't get either to work. Thanks.
You can use something like:

cd directory-where-you-want-to-put-the-files
wget -r ftp://ftp.eso.org/pub/qfits/
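If the extra directory levels that wget creates are unwanted, its standard recursion options trim them (a sketch; adjust --cut-dirs to the depth of the remote path):

```shell
# -np: don't ascend to the parent, -nH: no hostname directory,
# --cut-dirs=2: drop the leading pub/qfits path components
wget -r -np -nH --cut-dirs=2 ftp://ftp.eso.org/pub/qfits/
```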
Copy contents of remote server folder to current server?
1,373,228,270,000
I am trying to FTP some recording files to a remote server for backup every night. I am very confused about shell scripting. My question/problem is: I want to move the whole folder/directory, instead of individual files, to the remote server. Here is the current script:

HOST='10.113.68.50'
USER='sms'
PASSWD='Abc123451'
LOCALPATH='kmpy/unica/Campaign/partitions/partition1/CiktiDosyalari'
FILE=*.sms
DIR='SMS/'
ftp -n $HOST <<EOF
quote USER $USER
quote PASS $PASSWD
cd $DIR
lcd $LOCALPATH
put $FILE
quit
exit;
EOF
You can use mput * instead of put to upload all of the files in the directory. Furthermore, you can filter which files are sent; for example, mput *.jpg will transfer all and only the jpg files.
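Applied to the script in the question, the here-document would become something like this (a sketch of the OP's script; prompt turns off mput's per-file confirmation so the batch run doesn't stall waiting for y/n answers):

```shell
ftp -n $HOST <<EOF
quote USER $USER
quote PASS $PASSWD
cd $DIR
lcd $LOCALPATH
prompt
mput *.sms
quit
EOF
```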
Shell script: whole directory to the remote ftp server
1,373,228,270,000
I want to be able to store the login details all of my remote FTP accounts in one place, securely. Then I'd like clients like autofs/curlftpfs, Filezilla, etc to use this one store of passwords. I'm not interested in GUI-dependent solutions. Is this possible somehow?
Ideally those would be SFTP accounts, using SSH public key authentication rather than passwords. You'd gain both security and convenience. But let's assume you don't have a choice of not using FTP with passwords. You could store the passwords (the .netrc file) on an encrypted filesystem and mount that filesystem only when you want to access it. A simple way to create an encrypted directory tree is encfs.

Setup:

# install encfs, e.g. apt-get install encfs
mkdir ~/.passwords.d
encfs ~/.passwords.encfs ~/.passwords.d
mv ~/.netrc ~/.passwords.d
ln -s .passwords.d/.netrc ~
fusermount -u ~/.passwords.d

Daily use:

encfs ~/.passwords.encfs ~/.passwords.d
ftp …
fusermount -u ~/.passwords.d
A local, centralized, secure way to store FTP login data, including passwords
1,373,228,270,000
When downloading /var/log/apache2/other_vhosts_access.log (100 MB) from a distant server to my local computer via SFTP, I noticed that the network transfer was not compressed. Indeed similar compressed files are ~ 10 MB and it would have taken 1/10th of the downloading time I observed. Is there an option in SSH/SFTP settings to auto-compress file transfer to reduce bandwidth and uploading/downloading time? (The server has Ubuntu and the local computer is using Win + WinSCP).
On WinSCP, transport compression can be enabled on the SSH page of the Advanced Site Settings dialog. For the OpenSSH command-line client, the -C option to sftp (passed through as the -C option to ssh) provides transport compression for the session, e.g. sftp -C user@server.
Auto-compress SFTP file-transfer to reduce network data usage
1,373,228,270,000
I'm using a shell script which executes lftp mirror --reverse to upload files and directories to a remote server. Just before that it removes everything using glob -a rm -r -f *. The problem is, it is not so fast. The whole operation takes a couple of minutes, especially the recursive removal. I'm uploading a few megabytes of data in a couple of hundred files, but most of them do not change. I'm connecting through the FTPS protocol.

Question: How can I improve the performance of my script? I was thinking about uploading only files that are new or were changed locally, and at the same time removing the ones from the remote server that are not present on my local machine. Sadly, I don't know if that is possible or how to achieve it.

Whole script:

lftp $host << EOF
user $username $password
cd $destination_directory
glob -a rm -r -f *
mirror --reverse $local_directory .
exit
EOF
The solution was at hand. While digging through the LFTP manual I found that the mirror command has a --delete option which perfectly suits my needs.

--delete — delete files not present at the source
~ LFTP Manual

I changed

glob -a rm -r -f *
mirror --reverse $local_directory .

to

mirror --reverse --delete $local_directory .
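With that change, the whole upload block from the question collapses to a single mirror call. A sketch with placeholder values, printed here instead of being piped into lftp so the generated command stream is visible:

```shell
host='ftp.example.com'                 # placeholder
username='user'; password='secret'     # placeholders
destination_directory='/htdocs'
local_directory='/home/me/site'

# the command stream that would follow: lftp $host <<EOF ... EOF
cat <<EOF
user $username $password
cd $destination_directory
mirror --reverse --delete $local_directory .
exit
EOF
```

Replacing cat with lftp $host runs it for real; the remote-side recursive rm is gone entirely, and unchanged files are skipped by mirror.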
LFTP - Remove files from remote server while uploading via lftp mirror --reverse
1,373,228,270,000
I have used following firewalld rich rule. But it is allowing me more than two FTP connections in a minute. I expected that it could allow only two connections. firewall-cmd --add-rich-rule='rule service name=ftp limit value=2/m accept' Can anyone tell me what's wrong with above rule or do we have any other way we can do using firewalld rich rule?
It may be easier to add the rule in /etc/firewalld/zones/public.xml. This should work, but I'm not sure whether m (minute) is supported; h (hour) seems to be:

<rule family="ipv4">
  <service name="ftp"/>
  <log prefix="ftp fw limit 2/m " level="warning">
    <limit value="2/m"/>
  </log>
  <accept>
    <limit value="2/m"/>
  </accept>
</rule>

Also, make sure that there is no other accept rule for ftp! Source
How to add a firewalld rich rule that will allow only 2 FTP connections per minute to the FTP server?
1,373,228,270,000
I’m using the ftp command line tool and want to combine scripted input with user input. With only user input the prompt looks like this:

ftp>

But when I try to insert some input with a script, like

{ echo "user username passwd"; cat; } | ftp -n server.tld

how can I force ftp (or any CLI) to still use the interactive mode? I would prefer a solution based on standard shell tools.
There is a program to interact with interactive command-line tools exactly like the ftp example: expect. It is a specialized script shell based on the scripting language Tcl. It is very powerful, but you may get away without learning everything about it. A very useful tool is autoexpect, which can record an interactive session as an expect script. The recorded script is certainly helpful for understanding the basics. An expect script can interact with an interactive program like ftp. This can be combined with interaction between the user and the interactive program. Using the command interact in an expect script, control can be given to the user temporarily. While the user has control, the script still listens for events to take back control, so practically, both the user and the script are interacting simultaneously with the program.
Writing to an interactive prompt
1,373,228,270,000
I have a script that is running on my computer. The script is written in PHP under my XAMPP server. It reads some files from a remote FTP server and, after processing them, writes them back to the same server. The total estimated time for finishing the process seems to be around 48 hours. I really need to know: if I lock my screen and leave my desk for around two days (a couple of hours after the estimated finish time of the process), does the system stall the process, turn it idle, etc.? I just hope my PC does not get disconnected from the FTP server! Or, if it does (based on some configuration), what do I need to do to keep my PHP script running until it has finished completely? I am using Ubuntu 12.04.
As long as your system is up, the script will keep running, it will not stop as long as you don't log out of the system. Locking your screen will not stop the script.
keep script running while screen is locked for a long time
1,373,228,270,000
How do I set/change the default ftp root folder for a specific user? I want to be able to create a developer account that homes to different sites on a development box depending on what is currently being worked on. EDIT: The server is running Ubuntu and vsftpd.
If you specify the user_config_dir in vsftpd.conf, you can set any config option on a per-user basis. From man vsftpd.conf: This powerful option allows the override of any config option specified in the manual page, on a per-user basis. Usage is simple, and is best illustrated with an example. If you set user_config_dir to be /etc/vsftpd_user_conf and then log on as the user "chris", then vsftpd will apply the settings in the file /etc/vsftpd_user_conf/chris for the duration of the session. So, setting local_root in this way to the desired directory changes the FTP root for just that user.
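Concretely, following the manual's example, the setup might look like this (the user name and path below are illustrative):

```
# /etc/vsftpd.conf
user_config_dir=/etc/vsftpd_user_conf

# /etc/vsftpd_user_conf/chris  -- read only when user "chris" logs in
local_root=/srv/sites/current-project
```

Pointing local_root at a different directory (and restarting vsftpd) is then enough to re-home the developer account without touching the system account's home directory.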
How do I set the default ftp root folder for an Ubuntu user connecting to VSFTPD?
1,373,228,270,000
I'm trying to connect to an FTP server behind a firewall that allows incoming connections in the range 6100-6200 only. I have successfully connected to this server using curl like this:

curl --ftp-port :6100-6200 --list-only ftp.server

But I'd like to reproduce the behaviour of this curl command with other clients that are friendlier to use from Python; in principle Linux's ftp, but I'm open to other options if someone suggests a good one. I tried ftplib but it seems that this library does not allow you to select ports; I've tried it unsuccessfully. Currently I cannot make it work with ftp:

230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> passive
Passive mode on.
ftp> ls
227 Entering Passive Mode (XXX,XXX,XXX,XXX,202,251).
ftp: connect: Connection refused

The same set of commands works from my laptop, therefore it seems clear that the problem is the firewall. How can I force ftp to negotiate a data connection on a port in the range 6100-6200, thus emulating the behaviour of curl?
When you use FTP in passive mode, the server tells the client which (server-side) data port to use. The well-known FTP protocol includes no way for the client to express requests on which port range to use at the server end. There could be some extensions that could change that, but those are not necessarily widely supported. In your example, the message

227 Entering Passive Mode (XXX,XXX,XXX,XXX,202,251).

comes directly from the FTP server, as it's telling the client: "I'm listening for a data connection from you at IP address XXX.XXX.XXX.XXX, port 51963" (= 202*256 + 251). Each TCP connection has two port numbers: a local port number and a remote port number. Usually, an outgoing connection just picks the first free local port in the OS-specified range of ports to be used for outgoing connections, and the remote port is specified according to the service that's being used. In the case of passive FTP, the server will pick the remote port according to its configuration and will tell it to the client in the form of an FTP 227 response. There are generally two ways to handle passive FTP in firewalls:

a) The firewall and the FTP server both need to be configured in cooperation to accept/use a specific range of ports for passive FTP data connections, so the server won't even try to select a port the firewall is not going to let through, or

b) the firewall needs to listen in on the FTP command channel traffic, determine the port numbers used for each data connection, and dynamically allow passive FTP data connections between the FTP client and server using the port numbers declared on the command channel. If you are using the Linux iptables/netfilter firewall, this is exactly what the protocol-specific conntrack extension module for FTP does.
You'll just need to tell it what control connections it's allowed to listen to, since the previous policy of listening on all FTP control connections passing through the firewall system turned out to be exploitable by bad guys, and now such extensions will no longer be used automatically. For details, see this page or this question here on U&L SE. curl actually uses FTP in passive mode by default, but when you use the --ftp-port option it switches to active mode. From the man page (highlight mine):

-P, --ftp-port (FTP) Reverses the default initiator/listener roles when connecting with FTP. This option makes curl use active mode. curl then tells the server to connect back to the client's specified address and port, while passive mode asks the server to setup an IP address and port for it to connect to.

Regarding Python and ftplib, note that the question you referred to is more than 10 years old, and there's now a new answer added by Marcus Müller: since Python 3.3, ftplib functions that establish connections take a source_addr argument that allows you to do exactly this.
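The address/port encoding in a 227 reply can be decoded in a few lines of shell; the reply string below is a made-up example in the same format as above:

```shell
reply='227 Entering Passive Mode (10,0,0,1,202,251).'

# keep only the six comma-separated numbers between the parentheses
nums=${reply#*(}
nums=${nums%%)*}

IFS=, read -r a b c d p1 p2 <<EOF
$nums
EOF

# the last two numbers encode the port: high byte * 256 + low byte
port=$((p1 * 256 + p2))
echo "server data endpoint: $a.$b.$c.$d:$port"
```

For this sample reply it prints "server data endpoint: 10.0.0.1:51963", matching the arithmetic above.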
Setting which ports to use for passive FTP connection with Linux's ftp client
1,373,228,270,000
I'm trying to synchronise one file (among other things) using lftp. Even though the docs say that --file=FILE will "mirror a single file or globbed group (e.g. /path/to/*.txt)", lftp still seems to synchronise all the files of the directory of the passed file. I'm running this command:

lftp -c "set cmd:fail-exit true; set ftp:ssl-allow no; open gocamping; mirror --reverse --no-perms --exclude=CVS/ --exclude=.cvsignore --delete --verbose=1 --file='/vol/Grozs/Manas vietnes/gocamping/vietne_050011/www/discounts/aaa.php' --target-directory=~/web/discounts;"

In the directory /vol/Grozs/Manas vietnes/gocamping/vietne_050011/www/discounts/ there are also files ooo.php and uuu.php, and all three of them get transferred. What am I doing wrong?
Use -i instead. The excludes are not needed.

mirror --reverse --no-perms --delete --verbose=1 -i aaa.php '/vol/Grozs/Manas vietnes/gocamping/vietne_050011/www' ~/web/discounts

(Note that the source path contains a space, so it has to be quoted.)
How to synchronise a specific file to a remote FTP server using lftp?
1,373,228,270,000
I just finished installing a test web server with Debian Wheezy, apache and phpPgAdmin. I don't know if the default installation includes the ftp server. How can I check if one is installed? What do I need to do to start it?
There are several different FTP servers packaged within Debian, which you can see via:

apt-cache search ftp-server

One of the most popular servers around is proftpd, and that can be installed upon Debian systems with:

apt-get install proftpd

Once downloaded, debconf will ask whether you wish to run the server via inetd or in a standalone fashion. In general you want the latter option. After the installation the server will be running, and will grant access to all user accounts upon the host. If you wish to stop the server prior to more configuration you can do so with:

/etc/init.d/proftpd stop

The configuration of proftpd is conducted via the configuration file /etc/proftpd.conf
Is there a default ftp server in Debian Wheezy default installation?
1,427,300,495,000
I have an ftp server running, and it irregularly generates the latest file. The file is stored as Home->T22:30:10->new.txt, and the latest one would be (in a new folder) Home->T23:10:25->new.txt (note that this is a new folder with the latest time). I need to implement something (it could be anything: C code, a bash script, etc.) on a Linux machine that pulls the latest file over. I have looked into two options:

Use libcurl, parse the directory listing, and select the latest file. This is really annoying and time-consuming, and I still can't find an easy way to do it.
Use lftp and, at initialization, remove all the files on the server, so that each time I call lftp to download something, it would be the latest one. (This method is only conceptual and I haven't tried it in real life.)

Is there any easier option?
An approach that's often convenient is to mount the files, and then access them like you would access ordinary local files. For a server that you access through FTP, you can use CurlFtpFS.

mkdir theserver
curlftpfs theserver.example.com theserver

You'll need to pass the username and password to curlftpfs, either on the command line (which is unsafe as other users on your machine would be able to see them) or in the file ~/.netrc (strongly recommended). Here's a sample netrc line:

machine theserver.example.com login remoteusername password swordfish

Now that you've mounted the FTP server as a directory on your machine, you can use the usual commands such as ls, cp, etc. For example, to copy the file from the directory that comes last in lexicographic order (should be ok if your file names actually contain the date before the time):

set -- theserver/remote/path/T*
eval "last=\${$#}"
cp -p -- "$last/new.txt" "/some/where/local/${last##*/}.txt"

Or to copy the latest file, assuming that the filenames involved don't contain unprintable characters or newlines:

cd theserver/remote/path
last=$(ls -t -- T*/new.txt | head -n 1)
cp -p -- "$last" "/some/where/local/${last%/*}.txt"
FTP: get the latest file in server
1,427,300,495,000
Does anyone know which character encoding is used in .netrc? As a German user, I sometimes put umlauts in passwords. So my question is, are these treated as latin1 or may I assume that I am doing fine encoding the file in UTF-8?
There's no applicable standard for .netrc, which started off in the 1980s as a BSD feature. That of course predates UTF-8. However, since there are no standards, you may be using someone's "improvement" which assumes (a) locale-based encoding or (b) UTF-8. Don't count on it. The password used for ftp is covered by an RFC (again, predating UTF-8), which says the content is ASCII. In section 5.3.2, the relevant part of the BNF is:

<password> ::= <string>
<string>   ::= <char> | <char><string>
<char>     ::= any of the 128 ASCII characters except <CR> and <LF>

The later RFCs build on RFC 959, but don't change the rules. However, implementers stretch things, so you're able to use Latin-1.

Further reading:

RFC 5797: FTP Command and Extension Registry
RFC 2228: FTP Security Extensions
RFC 959: File Transfer Protocol (FTP) (defines passwords in ASCII)
Emacs bug#22081: 24.5; netrc.el fails parsing authinfo items spread over multiple lines
Solaris manpage for .netrc (1990)
using .netrc with sftp
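Since plain ASCII is the only encoding RFC 959 actually guarantees, a quick portability check on a candidate password can save trouble. A small shell sketch (both passwords are made up):

```shell
# succeeds (exit 0) only if the argument contains nothing but
# printable ASCII characters (space through tilde)
is_ascii() {
    ! printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]'
}

is_ascii 'swordfish123' && echo "ASCII only - safe everywhere"
is_ascii 'Grüße_geheim' || echo "non-ASCII - depends on the encoding"
```

With LC_ALL=C, grep works on raw bytes, so the umlauts' multi-byte UTF-8 sequences fall outside the space-to-tilde range and are flagged regardless of the local encoding.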
Character encoding of .netrc
1,427,300,495,000
I'm trying to access my ftp server through the browser but I am facing an issue. If I access it using Filezilla, I can see the directories fine; however, using a browser I get no directory listing. I added this entry to iptables hoping to solve the problem, but it didn't change anything:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
The simple solution is to switch Filezilla to use passive mode when connecting to that FTP server. When you connect to FTP you use port 21, which is known as the control channel. This is the connection used to send commands and receive notifications on the results of the commands issued. Note that for file listings (the output of LIST), file transfers (fetching files with RETR and putting files with STOR) and other operations that actually require transmission of data, a separate channel known as the data channel is created. FTP can operate this data channel in either of two modes, active or passive. Both of these refer to how the data channel is established.

You issue a LIST (in active mode):

CLIENT                           FIREWALL                    FTP SERVER
 (port 21)                                                    (port 21)
    | -------- LIST command ------> | --- LIST command issued --> |
    |                               |                             |
    | now listening on an           X <--- FTP server attempts    |
    | arbitrary port for the        |      to connect and gets    |
    | data channel, say 8000        |      denied by the firewall |

In active mode the client advertises an arbitrary listening port it creates, and the FTP server connects to this advertised address and port on the client machine. This is normally where firewalls block the traffic, because it goes to a random (often changing) high-order port number on the client host advertising the FTP data channel. Filezilla defaults to using a port between 6000 and 7000. If the firewall does not block this connection, the output of the LIST command is then transferred over this separate channel.
You issue a LIST (in passive mode):

CLIENT                           FIREWALL                    FTP SERVER
 (port 21)                                                    (port 21)
    | -------- LIST command ------> | --- LIST command issued --> |
    |                               |                             |
    | <--- FTP server advertises the listening data port -------- | server now has a
    |      over the control channel                               | listening data channel

CLIENT                           FIREWALL                    FTP SERVER
    | <----------------- open control channel ------------------> |
    |                                                             |
    | --- client establishes a ---> | --- passive data channel -> |
    |     connection to the         |     connection allowed      |
    |     advertised data channel   |     by the firewall         |

In passive mode the roles are reversed: the FTP client issues a PASV command before the LIST command. The FTP server then creates a listening TCP port and advertises it for the client to connect to in order to establish the data channel. This is usually allowed by most firewalls (as clients can make connections outbound to any port). Note that if there is a firewall in between your FTP server and the internet, this firewall ALSO has to be configured to have these ports opened to allow the passive connections. Most FTP servers provide the ability to set the range in which these ports will be advertised, and these can then be opened on the firewall. If you are restricted and have a client that cannot do passive, Filezilla offers the ability (under Edit -> Settings... -> Connection/FTP/Active Mode) to set which ports to use, and you can then add these to your firewall.
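On the server side (vsftpd here), the usual fix is to pin the passive range so the firewall can be opened for exactly those ports. An illustrative vsftpd.conf fragment (the port numbers and address are examples, not required values):

```
pasv_enable=YES
pasv_min_port=6100
pasv_max_port=6200
# when the server sits behind NAT, advertise the public address in PASV replies:
# pasv_address=203.0.113.10
```

With a fixed range like this, a matching iptables ACCEPT rule for ports 6100-6200 makes passive listings work for any client.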
vsFTPd Browser no Listing
1,427,300,495,000
As we know, we can set a user's home folder at creation time using

useradd user -p user_passwd -d /home/ftp/user_dir/ -s /bin/false

but how do we change the folder for an existing user, for example to /home/ftp/root for root?
Basically you don't add a folder, you change the home directory:

usermod -d /home/ftp/root root

If you want to move the existing files as well, use this:

usermod -d /home/ftp/root -m root

Allowing root to access via FTP is not good practice; it's a security hole. Even then, I would rather recommend creating a symlink to the target folder from the existing directory.
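The symlink alternative mentioned above can be sketched with throw-away directories (mktemp stands in here for the real home directory and FTP tree):

```shell
home=$(mktemp -d)       # stand-in for the user's existing home directory
ftp_root=$(mktemp -d)   # stand-in for /home/ftp/root

# expose the FTP tree inside the home directory without moving anything
ln -s "$ftp_root" "$home/ftp"

ls -ld "$home/ftp"
```

The real-world equivalent would be something like ln -s /home/ftp/root ~root/ftp, leaving the actual home directory untouched.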
How to add folder to existing user [proftpd]
1,427,300,495,000
Response: 220-This is a private system - No anonymous login Response: 220 You will be disconnected after 60 minutes of inactivity. Command: AUTH TLS Response: 234 AUTH TLS OK. Status: Initializing TLS... Error: GnuTLS error -50: The request is invalid. Error: Failed to initialize TLS. Error: Could not connect to server I'm trying to connect to a site using FTPES. It worked before on Fedora/Filezilla. But now I'm using Scientific-Linux with Filezilla, and it gives this. What am I missing?
With FileZilla 3.5.2 it works perfectly. With FileZilla 3.5.3 it produces the error message above. So it's a bug AFAIK.
Filezilla: GnuTLS error when using FTPES
1,427,300,495,000
While using FTP we can use the bi option to transfer files in binary mode; however, I am unable to find a similar option in SFTP. Please find my code snippet below:

fileTransferToDEST()
{
echo "mput $4/$1 $3/" | sftp -b - $SRV_USER@$DEST_IP
}

fileTransferToDEST $filename $logpathwithfilename $destinationpath $sourcepath
returnvalue=$?
if [ "$returnvalue" != "0" ]; then
    echo;echo "FTP : Failed while transfering"
    exit 2
fi

Please advise. Thanks in advance.
OpenSSH sftp supports binary mode only, so it's implicit. See also How to transfer binary file in SFTP? (on Stack Overflow).
Transferring files via Binary mode in SFTP
1,427,300,495,000
We have a daily process which moves files from one server to another via FTP using a script. Please find the snippet below:

fileTransferToDEST()
{
ftp -inv $DEST_IP 1>$2 <<END_SCRIPT
quote USER $SRV_USER
quote PASS $SRV_PASS
lcd $4
cd $3
bi
prompt
hash
mput $1
quit
END_SCRIPT
}

fileTransferToDEST $filename $logpathwithfilename $destinationpath $sourcepath
returnvalue=$?
FtpStatus=`grep "Transfer complete" $logpathwithfilename`
if [ "$FtpStatus" = '' -o "$returnvalue" != "0" ]; then
    echo;echo "FTP : Failed while transfering"
    exit 2
fi

I have been assigned to convert the FTP script to use SFTP. I have successfully finished all the necessary steps to have passwordless login in SFTP. Please find the script using SFTP below:

fileTransferToDEST()
{
sftp $SRV_USER@$DEST_IP 1>$2 <<END_SCRIPT
lcd $4
cd $3
mput $1
quit
END_SCRIPT
}

fileTransferToDEST $filename $logpathwithfilename $destinationpath $sourcepath
returnvalue=$?
FtpStatus=`grep "Transfer complete" $logpathwithfilename`
if [ "$FtpStatus" = '' -o "$returnvalue" != "0" ]; then
    echo;echo "FTP : Failed while transfering"
    exit 2
fi

However I am unable to check/find how to successfully check if the file has 100% been transferred to the destination. How can I achieve this?

Code after applying -b, based on the answer:

fileTransferToDEST()
{
echo "mput $4/$1 $3/" | sftp -b - $SRV_USER@$DEST_IP
}

fileTransferToDEST $filename $logpathwithfilename $destinationpath $sourcepath
returnvalue=$?
if [ "$returnvalue" != "0" ]; then
    echo;echo "FTP : Failed while transfering"
    exit 2
fi
OpenSSH sftp indicates its result using an exit code (which you are checking already). If it returns 0, everything went fine. If it returns 1, there was a problem. No need to parse the output for an arbitrary message. Just execute it in batch mode, so that it aborts on any error. Use the -b - switch for that (the - indicates that you still want to provide the commands on stdin, rather than via a file, which would normally follow -b).
Check if the file has been successfully transferred to the destination with SFTP
1,427,300,495,000
I have been asked by the data owner to copy a specific folder (and its large amount of subfolders and files) via FTPS to our cloud storage provider. I am using LFTP for that, and the upload worked well until I hit a snag. There are several folders with multiple files that have the same filename except for case. For example, folder data has the following files: testfile1.txt, TestFile1.TXT When I try to upload those via LFTP, I get an error that the file already exists. So for my purposes, I need the files to be case-insensitive before uploading. To address this issue, I would like to use a script that searches the current directory recursively, and moves any case-insensitive duplicates to a subfolder. In my example above, I would want the script to create a subfolder called Duplicates and then move TestFile1.TXT into it. I suppose it's possible that I could have multiple duplicated filenames, so the script should create a Duplicates2 folder for the second duplicated filename, and so on. Also, I should note that for the few "duplicated" files that I checked, they had differing filesizes. I am not going to make any assumptions about the files being actual duplicates, which is why I want to move them rather than delete them.
The bash script below loops through the files in the current directory, looking for duplicate filenames case-insensitively. If a match is found, it looks for a "Duplicates" folder name that doesn't exist already, creates it, then moves the duplicate file into that directory. The outer loop is there in order to re-compute the file globs (*) for the loops, in case a file gets moved. The outer loop runs until no files are moved.

#!/bin/bash
changes=1
while [ $changes -gt 0 ]
do
    changes=0
    for one in *
    do
        for two in *
        do
            shopt -u nocasematch
            # if it's the exact same filename, skip
            [[ "$one" == "$two" ]] && continue
            shopt -s nocasematch
            # if the file name matches case-insensitively, then mv it
            if [[ "$one" == "$two" ]]
            then
                suffix=
                while [ -d Duplicates"${suffix}" ]
                do
                    suffix=$((suffix + 1))
                done
                mkdir Duplicates"${suffix}"
                mv "$two" Duplicates"${suffix}"
                changes=1
                break
            fi
        done
    done
done

With these sample files:

afile.txt
TestFile1.TXT
TESTfile1.txT
testfile1.txt

A sample run of the script creates:

$ tree .
.
├── afile.txt
├── Duplicates
│   └── TestFile1.TXT
├── Duplicates1
│   └── testfile1.txt
└── TESTfile1.txT
Move files that have the same case-insensitive filename
1,427,300,495,000
I have been at this a few times and have become completely stuck in configuring this. Currently, I have this set up for an FTP server:

Local PC (my Mac) -> AWS EC2 (HAProxy) -> FTP Server

Here's a few things I have done so far:

Confirmed ports 20/21 are open in Security Groups
Configured in SGs that TCP ports are opened (for passive ports)
Confirmed via netstat that the appropriate ports are opened (as a sanity check, I got HAProxy to bind to TCP ports 0-65535; same for the SGs)
Ran tcpdump while trying to connect to the FTP server via the proxy

Here is a copy of my haproxy.cfg:

listen FTP :21, :10000-10250
    mode tcp
    log 127.0.0.1 local0 debug info warning error
    bind *:21
    bind *:20
    server foo xx.xx.xx.xx:21 check port 21

When I run tcpdump, I get:

10:54:11.695983 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [P.], seq 45:51, ack 684, win 16384, options [nop,nop,TS val 652929632 ecr 148377527], length 6: FTP: EPSV
10:54:11.726462 IP ec2-54-234-x-x.compute-1.amazonaws.com.ftp > 10.33.x.x.56374: Flags [P.], seq 684:726, ack 51, win 227, options [nop,nop,TS val 148377680 ecr 652929632], length 42: FTP: 229 Extended Passive mode OK (|||12612|)
10:54:11.726513 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [.], ack 726, win 16378, options [nop,nop,TS val 652929662 ecr 148377680], length 0
10:55:27.134823 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [P.], seq 51:80, ack 726, win 16384, options [nop,nop,TS val 653004663 ecr 148377680], length 29: FTP: EPRT |1|10.33.x.x|56397|
10:55:27.165241 IP ec2-54-234-x-x.compute-1.amazonaws.com.ftp > 10.33.x.x.56374: Flags [P.], seq 726:798, ack 80, win 227, options [nop,nop,TS val 148396540 ecr 653004663], length 72: FTP: 500 I won't open a connection to 50.232.x.x (only to 54.234.x.x)
10:55:27.165287 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [.], ack 798, win 16375, options [nop,nop,TS val 653004693 ecr 148396540], length 0
10:55:27.165377 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [P.], seq 80:106, ack 798, win 16384, options [nop,nop,TS val 653004693 ecr 148396540], length 26: FTP: PORT 10,33,x,x,220,77
10:55:27.194982 IP ec2-54-234-x-x.compute-1.amazonaws.com.ftp > 10.33.x.x.56374: Flags [P.], seq 798:870, ack 106, win 227, options [nop,nop,TS val 148396547 ecr 653004693], length 72: FTP: 500 I won't open a connection to 50.232.x.x (only to 54.234.x.x)
10:55:27.195025 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [.], ack 870, win 16375, options [nop,nop,TS val 653004722 ecr 148396547], length 0
10:55:27.195304 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [P.], seq 106:112, ack 870, win 16384, options [nop,nop,TS val 653004722 ecr 148396547], length 6: FTP: LIST
10:55:27.224432 IP ec2-54-234-x-x.compute-1.amazonaws.com.ftp > 10.33.x.x.56374: Flags [P.], seq 870:894, ack 112, win 227, options [nop,nop,TS val 148396555 ecr 653004722], length 24: FTP: 425 No data connection
10:55:27.224464 IP 10.33.x.x.56374 > ec2-54-234-x-x.compute-1.amazonaws.com.ftp: Flags [.], ack 894, win 16381, options [nop,nop,TS val 653004751 ecr 148396555], length 0

I configured logging on HAProxy and currently have this (not sure if this is helpful):

Jun 12 14:49:25 localhost haproxy[18805]: 50.232.x.x:14857 [12/Jun/2017:14:48:24.811] FTP FTP/media 1/14/60772 792 -- 0/0/0/0/0 0/0
Jun 12 14:49:25 localhost haproxy[18805]: 50.232.x.x:14857 [12/Jun/2017:14:48:24.811] FTP FTP/media 1/14/60772 792 -- 0/0/0/0/0 0/0
Jun 12 14:51:48 localhost haproxy[18805]: 50.232.x.x:56254 [12/Jun/2017:14:49:41.820] FTP FTP/media 1/13/127012 960 -- 0/0/0/0/0 0/0
Jun 12 14:51:48 localhost haproxy[18805]: 50.232.x.x:56254 [12/Jun/2017:14:49:41.820] FTP FTP/media 1/13/127012 960 -- 0/0/0/0/0 0/0
Jun 12 14:52:02 localhost haproxy[18805]: 50.232.x.x:56342 [12/Jun/2017:14:51:54.618] FTP FTP/media 1/14/7670 401 -- 1/1/1/1/0 0/0
Jun 12 14:52:02 localhost haproxy[18805]: 50.232.x.x:56342 [12/Jun/2017:14:51:54.618] FTP FTP/media 1/14/7670 401 -- 1/1/1/1/0 0/0
Jun 12 14:53:36 localhost haproxy[18805]: 50.232.x.x:56345 [12/Jun/2017:14:51:59.386] FTP FTP/media 1/19/97481 1032 -- 0/0/0/0/0 0/0
Jun 12 14:53:36 localhost haproxy[18805]: 50.232.x.x:56345 [12/Jun/2017:14:51:59.386] FTP FTP/media 1/19/97481 1032 -- 0/0/0/0/0 0/0
Jun 12 14:53:59 localhost haproxy[18805]: 50.232.x.x:56373 [12/Jun/2017:14:53:37.880] FTP FTP/media 1/14/21641 333 -- 0/0/0/0/0 0/0
Jun 12 14:53:59 localhost haproxy[18805]: 50.232.x.x:56373 [12/Jun/2017:14:53:37.880] FTP FTP/media 1/14/21641 333 -- 0/0/0/0/0 0/0

I can connect to the FTP server and cd to directories. I cannot run an ls/put/get on there though. When I try in passive mode, I get:

ftp> ls
229 Extended Passive mode OK (|||12612|)
ftp: Can't connect to `54.234.x.x': Operation timed out
500 I won't open a connection to 50.232.x.x (only to 54.234.x.x)
425 No data connection

Running in active mode:

Passive mode: off; fallback to active mode: off.
ftp> ls
500 I won't open a connection to 50.232.x.x (only to 54.234.x.x)
425 No data connection

I don't believe this to be an issue with the FTP servers themselves, because I can connect to them via the proxy, and even when I connect to the FTP servers directly I can run commands in both active and passive mode. My questions are:

What else can I look into to troubleshoot this further?
Are there any further logging configurations I can apply that would enhance logging for TCP requests beyond what I currently have?
Is there something configuration-wise I am missing in HAProxy? (I know that natively HAProxy is an HTTP load balancer, but I have seen enough examples where it is configured as a proxy for FTP servers, and I cannot reconfigure the FTP servers to run SFTP.)
After much research and some investigation with my colleague on this, this is what was happening: HAProxy could have been a viable solution, but the FTP server is configured to force passive mode. HAProxy can be used to proxy FTP connections, but there is no real way to configure it to react correctly to passive connections. As a result, I was unable to run the ls/get/put (and probably even more) commands! My solution was to go to ftp-proxy and configure it. Once I configured ftp-proxy to route traffic to the FTP server (while configuring it to listen for passive connections), voila! It worked!!
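The failure mode is visible in the control-channel replies themselves: in a PASV or EPSV reply the server advertises its own address and port, and a plain TCP proxy never rewrites them, so the client dials past the proxy. As an illustrative sketch (hypothetical addresses; the EPSV reply format matches the `229 Extended Passive mode OK (|||12612|)` seen in the question), this is what a protocol-aware proxy would have to parse and rewrite:

```python
import re

def parse_pasv(reply):
    """Extract (host, port) from a '227 Entering Passive Mode' reply."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return ".".join(map(str, (h1, h2, h3, h4))), p1 * 256 + p2

def parse_epsv(reply):
    """Extract the data port from a '229 Extended Passive Mode' reply."""
    return int(re.search(r"\(\|\|\|(\d+)\|\)", reply).group(1))

# The server tells the client to connect directly to 54.234.1.2:12612,
# bypassing the proxy frontend entirely.
print(parse_pasv("227 Entering Passive Mode (54,234,1,2,49,68)"))
# ('54.234.1.2', 12612)
print(parse_epsv("229 Extended Passive mode OK (|||12612|)"))  # 12612
```

Unless the proxy rewrites these replies (and relays the advertised data port), passive transfers will always connect to the backend's own address, which is exactly what the `500 I won't open a connection` errors show.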
HaProxy 1.5.8 to FTP server
1,427,300,495,000
I've created a user 'www' and added it to the 'www-data' group. I've also set the home directory of 'www' to /var/www/. I would like to use 'www' to transfer files in and out of my web server by FTP. The problem is, when I run the command:

sudo chown -R www-data:www-data /var/www/

...I don't have permission to write files via FTP. However, when I run:

sudo chown -R www:www /var/www

...I have full FTP access but get a 'Forbidden' message in my browser. Any advice on how to get full FTP access, including all subfolders, would be really appreciated.
That means that you already have a www-data user which Apache uses that should have the necessary permissions in /var/www. The simplest solution would be to use that same user, but you could also assign the www-data group to your new user and make sure the /var/www directory structure allows the group to write to it: chown -R www-data:www-data /var/www chmod -R ug+rw /var/www
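As a sketch of what `chmod ug+rw` does at the bit level (the starting mode below is made up for illustration, not taken from the question):

```python
import stat

def add_ug_rw(mode):
    """Set read+write for user and group, leaving other bits untouched --
    the bit-level effect of `chmod ug+rw`."""
    return mode | stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP

print(oct(add_ug_rw(0o604)))  # 0o664: group can now write; others unchanged
print(oct(add_ug_rw(0o755)))  # 0o775
```

This is why combining the `www-data` group ownership with group-writable modes lets both Apache and the FTP user work on the same tree.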
Ubuntu server 16.04 - Get full FTP access to /var/www/
1,427,300,495,000
I've been trying to configure my FTPS server, which is behind NAT. So I've opened ports 20 and 21 as well as 2120-2180 on my NAT (TCP+UDP) and configured proftpd to use these ports for passive communications. However, trying to connect using FileZilla leads to the following log (translated from French, but quite clear actually):

Status: Resolving address of heardrones.com
Status: Connecting to 93.30.208.56:21...
Status: Connection established, waiting for welcome message...
Response: 220 ProFTPD 1.3.5 Server (HEAR Server) [93.30.208.56]
Command: USER hear_downloader
Response: 331 Password required for hear_downloader
Command: PASS ********
Response: 230 User hear_downloader authenticated
Command: OPTS UTF8 ON
Response: 200 UTF-8 enabled
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Type set to I
Command: PASV
Error: Connection timed out
Error: Failed to retrieve directory listing

It times out before the server is even able to send the "PASV" answer! What could cause this? The answer to the PASV command uses the same port as all other commands (PWD, TYPE, ...), so where could the problem come from? Here is the network design:

ProFTPD server, no iptables, fixed IP 192.168.0.13 -> (Wi-Fi) ISP box - French ISP (SFR), port forwarding 20, 21, 22, 2120-2180 to 192.168.0.13 -> (optical fiber!) Internet

I can give box settings screenshots and proftpd config files if needed. Connecting from LAN/localhost works perfectly.
FTP is a horrible protocol. It uses two ports -- one for commands, one for data. This makes it notoriously difficult to NAT, since a router would need to parse the command channel and figure out that a second connection is expected for this FTP conversation. Doing so is ugly, but also the only way to make NAT work with FTP.

FTPS encrypts the command channel, thereby making it impossible for any router to inspect the packets and figure out where the data channel is going to be. Obviously that means the router cannot account for the data channel; so when that channel is initiated by the client (as required by PASV), your NATting router won't know what to do with it. It is not possible to fix this, due to the way FTP works.

Just say no to FTP, and use SFTP or something of the sort instead (which transfers files over an SSH tunnel and therefore requires only one TCP connection). Most graphical FTP clients have support for SFTP too, these days.
Proftpd doesn't answer to "PASV" command
1,427,300,495,000
I need FTP software that I can use in Debian, and that I can also distribute when I remaster Debian into my own distro. It needs to offer both text and graphical interfaces, but the text-based part needs to be optional for the user.
Suggestion: gFTP. It has both a text and a graphical interface. See the license; it can be redistributed.
Good FTP software for Debian desktop for home networks [closed]
1,427,300,495,000
I'm having an issue where when I transfer a Python file to my VPS via FTP and try to run it using ./foo.py I am returned with the error: : No such file or directory. The error seems to indicate that the file I am trying to execute does not exist. But I can run the program with no problems using python foo.py which leads me to believe that the error actually probably means something else. At first I thought it could be an issue with the shebang line, so I copied all of the content of the file and pasted it into a new file on the VPS that had not been transferred via FTP. The two files had exactly the same content but when I ran the new file using ./bar.py it ran as expected. So I've come to the conclusion that this could be an issue with the way that it is transferred. I have switched between ASCII and binary but both of these transfer methods give the same error. Is it possible to stop this from happening?
This happens when a file contains \r\n as a line terminator instead of \n, since \r is a C0 control code meaning "go to the beginning of the current line". To fix, run dos2unix foo.py. Example session: ben@joyplim /tmp/cr % echo '#!/usr/bin/env python' > foo.py ben@joyplim /tmp/cr % chmod +x foo.py ben@joyplim /tmp/cr % ./foo.py ben@joyplim /tmp/cr % unix2dos foo.py unix2dos: converting file foo.py to DOS format ... ben@joyplim /tmp/cr % ./foo.py : No such file or directory ben@joyplim /tmp/cr % ./foo.py 2>&1 | xxd 0000000: 2f75 7372 2f62 696e 2f65 6e76 3a20 7079 /usr/bin/env: py 0000010: 7468 6f6e 0d3a 204e 6f20 7375 6368 2066 thon.: No such f 0000020: 696c 6520 6f72 2064 6972 6563 746f 7279 ile or directory 0000030: 0a . Specifically note the 0d3a in the output.
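What dos2unix does can be sketched in a few lines of Python (the file contents here are hypothetical):

```python
dos_text = "#!/usr/bin/env python\r\nprint('hello')\r\n"

# The kernel reads the shebang up to the first '\n', so with DOS line
# endings it tries to exec the interpreter 'python\r' -- which doesn't exist.
print(repr(dos_text.split("\n", 1)[0]))   # '#!/usr/bin/env python\r'

# dos2unix simply strips the carriage returns:
unix_text = dos_text.replace("\r\n", "\n")
print(repr(unix_text.split("\n", 1)[0]))  # '#!/usr/bin/env python'
```

This also explains why `python foo.py` still worked: the Python interpreter tolerates `\r\n` line endings, while the kernel's shebang handling does not.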
Why does trying to run a python executable return ': No such file or directory' after transferring it to server via FTP? [duplicate]
1,427,300,495,000
A vendor has provided these FTP connection params so I can upload some data for them... Host: host.com Port: 46800 Protocol: FTP – File Transfer Protocol Encryption: Require implicit FTP over TLS Logon Type: Normal User: [ username ] Password: [ password ] It isn't working for me... $ ftp -p host.com 46800 Connected to host.com 421 Service not available, user interrupt. Connection closed. ftp> I suspect the "Require implicit FTP over TLS" param might be the issue? (Maybe?) TLS isn't mentioned in the FTP man page. What would be a command that would allow me to connect and upload?
The ftp program is for the insecure ftp protocol. Your vendor has specified that you use Implicit FTP over TLS, which is a way to encrypt the connection and keep your credentials and data private over the Internet. Fortunately, there is a program called lftp which understands this protocol.

lftp
open -u [username] ftps://host.com:46800
Password: [enter your password]
ls
[your remote files should be listed]

lftp supports many protocols. This webpage lists them in an easy-to-read table.
How can I connect and upload to this FTP host on the console?
1,427,300,495,000
I have a Linux machine running Red Hat 5.x. Please advise: with which command can I identify if someone is trying to copy files from my machine via SFTP or FTP? Is it possible to verify this on my Linux machine? Thanks.
Sure you can use lsof to see what activity is currently taking place on the server. Here's what the output would look like for an idle connection to an SFTP server. $ sudo /usr/sbin/lsof -p $(pgrep sftp) COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sftp-serv 30268 sam cwd DIR 0,19 20480 28312529 /home/sam (mulder:/export/raid1/home/sam) sftp-serv 30268 sam rtd DIR 253,0 4096 2 / sftp-serv 30268 sam txt REG 253,0 51496 48727430 /usr/libexec/openssh/sftp-server sftp-serv 30268 sam mem REG 253,0 109740 46368404 /lib/libnsl-2.5.so sftp-serv 30268 sam mem REG 253,0 613716 48382913 /usr/lib/libkrb5.so.3.3 sftp-serv 30268 sam mem REG 253,0 1205988 48387619 /usr/lib/libnss3.so sftp-serv 30268 sam mem REG 253,0 33968 48377969 /usr/lib/libkrb5support.so.0.1 sftp-serv 30268 sam mem REG 253,0 15556 48387614 /usr/lib/libplc4.so sftp-serv 30268 sam mem REG 253,0 11524 48387615 /usr/lib/libplds4.so sftp-serv 30268 sam mem REG 253,0 190712 48383685 /usr/lib/libgssapi_krb5.so.2.2 sftp-serv 30268 sam mem REG 253,0 1706232 46368382 /lib/libc-2.5.so sftp-serv 30268 sam mem REG 253,0 50848 46367899 /lib/libnss_files-2.5.so sftp-serv 30268 sam mem REG 253,0 46624 46367905 /lib/libnss_nis-2.5.so sftp-serv 30268 sam mem REG 253,0 1298276 46368392 /lib/libcrypto.so.0.9.8e sftp-serv 30268 sam mem REG 253,0 232156 48387613 /usr/lib/libnspr4.so sftp-serv 30268 sam mem REG 253,0 45432 46368394 /lib/libcrypt-2.5.so sftp-serv 30268 sam mem REG 253,0 121324 48387616 /usr/lib/libnssutil3.so sftp-serv 30268 sam mem REG 253,0 75088 46368385 /lib/libz.so.1.2.3 sftp-serv 30268 sam mem REG 253,0 137944 46368395 /lib/libpthread-2.5.so sftp-serv 30268 sam mem REG 253,0 15308 46368401 /lib/libutil-2.5.so sftp-serv 30268 sam mem REG 253,0 20668 46368384 /lib/libdl-2.5.so sftp-serv 30268 sam mem REG 253,0 130860 46368381 /lib/ld-2.5.so sftp-serv 30268 sam mem REG 253,0 157336 48382170 /usr/lib/libk5crypto.so.3.1 sftp-serv 30268 sam mem REG 253,0 93508 46368390 /lib/libselinux.so.1 sftp-serv 30268 sam 
mem REG 253,0 233296 46368389 /lib/libsepol.so.1 sftp-serv 30268 sam mem REG 253,0 7812 46368391 /lib/libcom_err.so.2.1 sftp-serv 30268 sam mem REG 253,0 84904 46368388 /lib/libresolv-2.5.so sftp-serv 30268 sam mem REG 253,0 7880 46368387 /lib/libkeyutils-1.2.so sftp-serv 30268 sam 0u unix 0xcb014040 0t0 104100868 socket sftp-serv 30268 sam 1u unix 0xcb014040 0t0 104100868 socket sftp-serv 30268 sam 2u unix 0xd8077580 0t0 104100870 socket sftp-serv 30268 sam 3u unix 0xcb014040 0t0 104100868 socket sftp-serv 30268 sam 4u unix 0xcb014040 0t0 104100868 socket Now when some files are currently being copied from the server: $ sudo /usr/sbin/lsof -p $(pgrep sftp) COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sftp-serv 30268 sam cwd DIR 0,19 20480 28312529 /home/sam (mulder:/export/raid1/home/sam) ... sftp-serv 30268 sam 5r REG 0,19 3955027 9257067 /home/sam/which witch is wich-dDSr2oxZeAM.mp3 (mulder:/export/raid1/home/sam) The line that shows me copying the file which witch is wich-dDSr2oxZeAM.mp3 is at the bottom of the output. When I use SFTP to put a file it shows up like this: sftp-serv 30268 sam 5r REG 0,19 1933312 9257073 /home/sam/bob.mp3 (mulder:/export/raid1/home/sam) See a difference? Me neither, so this method can only tell you whether a file is currently being accessed via a put or get but it cannot distinguish between the two. However this will tell you if the connection is "active" in the sense if there's a file being read/written from/to the SFTP server. Watching the daemon I typically use this method when I want to watch the SFTP server. $ sudo watch "/usr/sbin/lsof -p $(pgrep sftp)" This will run the lsof command every 2 seconds, "polling" it for any activity. Multiple connections If you have more than 1 user connecting at a time, you may need to modify the $(pgrep sftp) and pick a specific PID, if there are multiple sftp-server instances. Also you'll have to identify which user is accessing the files via SFTP. 
For that though, you can look at the "USER" column in the lsof output.
how to know on my linux machine if connection VIA sftp is active
1,427,300,495,000
I know how to change the umask settings for all the users using ftpd, but how can I change the umask for a given user using FTPD? AIX/6100-05-02-1034 FTP server (Version 4.2)
The AIX ftpd doesn't provide per-user umask settings. You can set a global umask value with the -u switch, but neither the command line nor the configuration file allow you to set this on a per-user basis. Note that you can set the umask, and run chmod, from the client side with SITE commands: SITE UMASK 002 SITE CHMOD 600 your_file
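The arithmetic behind `SITE UMASK 002` can be sketched as follows (0o666 is the typical base mode an FTP daemon requests for newly created files):

```python
def apply_umask(base, umask):
    """New files get the requested base mode with the umask bits cleared."""
    return base & ~umask

print(oct(apply_umask(0o666, 0o002)))  # 0o664: group keeps write access
print(oct(apply_umask(0o666, 0o022)))  # 0o644: a common global default
```

So a per-session `SITE UMASK` followed by uploads gives the same effect as a per-user setting would, just set from the client side.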
AIX ftpd - how to set umask for a given user?
1,427,300,495,000
I have a user account rootftp which two people use every once in a while to FTP into a machine; I believe it is running Solaris as a Xerox FreeFlow print server. The physical machine is a Xerox N-Series, Reg Model D01D, Reg Type D01D001. The two users are running Windows 10, if that affects anything. Say we have the two following paths:

Path 1 = /var/somewhere/somewheredeeper
Path 2 = /var/somewhere

When either of the two users uses FTP to get to this machine, they start out in the location of Path 1. Where/how can I make a change so they start out in Path 2 when they connect through FTP? They need access to Path 2, but if they start out in Path 1 they cannot go up a level higher into Path 2. I know next to nothing about Linux, but I am very slightly familiar with bash and using a terminal. I wasn't able to find this particular information from searching. Thanks for any assistance!
Very likely you just need to update the user's home directory. You could vi /etc/passwd and change /var/somewhere/somewheredeeper to /var/somewhere, but you'll then need to update /var/somewhere to have ownership and permissions that allow the rootftp user to FTP to that directory.
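For illustration, the home directory is the sixth colon-separated field of an /etc/passwd entry; the entry below is made up, but editing /etc/passwd as suggested amounts to rewriting that one field:

```python
entry = "rootftp:x:1001:1001:FTP user:/var/somewhere/somewheredeeper:/bin/sh"

fields = entry.split(":")
print(fields[5])  # /var/somewhere/somewheredeeper

# Changing the home-directory field is the whole edit:
fields[5] = "/var/somewhere"
print(":".join(fields))
```

Most FTP daemons (including the classic Solaris in.ftpd) drop the user into that home directory at login, which is why this single field controls the starting location.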
How to setup starting file location for a user using FTP
1,427,300,495,000
I'm trying to run tnftpd on OS X, which is NetBSD's FTP server and used to be OS X's FTP server. I built and installed it from Apple's sources. Unfortunately, it seems that I cannot run the server without root privileges. These have been my approaches so far to get the server to work without root privileges:

I've tried changing the port number via the -P option, to ensure that it uses no privileged ports.
I've tried fiddling with config files, such as ftpd.conf and ftpusers.
I've also tried the -r option (which disallows root privileges once a user logs in).

All of these attempts have been to no avail. Some examples to illustrate my attempts:

$ ftpd -lnD            # exit code is 0, but `ps' shows no server running
$ ftpd -lnDr           # supposed to drop root privileges, but same as above
$ # let's try running on a different port...
$ ftpd -lnDr -P 50001  # exit code still 0, but no dice

However, if I try something like this (this is a scenario where I have no custom configurations in place):

$ sudo ftpd -lnD
Password:
$ ps aux | grep -i ftpd
root 21998 0.0 0.0 4298888 720 ?? Ss 10:41PM 0:00.00 ftpd -lnD

I have no problems. How can I run the tnftpd server without root privileges? Is it even possible?
According to the man page tnftpd(8) ... The server uses the TCP protocol and listens at the port specified in the ``ftp'' service specification; see services(5). and a scan through ftpd.conf(5) shows no obvious means to fiddle with the listen port (as opposed to the data port, which is different), so let's see if we can modify the services file, which is probably a bad idea.

$ sudo perl -i.oops -pe 's/^(ftp\s+21)/${1}21/' /etc/services
$ grep 2121 /etc/services
ftp              2121/udp   # File Transfer [Control]
ftp              2121/tcp   # File Transfer [Control]
scientia-ssdb    2121/udp   # SCIENTIA-SSDB
scientia-ssdb    2121/tcp   # SCIENTIA-SSDB
nupaper-ss      12121/tcp   # NuPaper Session Service
nupaper-ss      12121/udp   # NuPaper Session Service
$

And with this horrible, horrible kluge effected we now start ftpd... (this is on a 10.11.6 system which has ftpd installed by default under /usr/libexec)

$ /usr/libexec/ftpd -lnDr -P 50001
$

And it is running as not-root at the not-21 port:

$ pgrep -lf ftpd
35258 /usr/libexec/ftpd -lnDr -P 50001
$ lsof -P -p 35258 | grep 2121
ftpd    35258 jhqdoe   4u  IPv4 0x817b7cd1effd8d7f   0t0  TCP *:2121 (LISTEN)
ftpd    35258 jhqdoe   5u  IPv6 0x817b7cd1effa3107   0t0  TCP *:2121 (LISTEN)
$

Whether this works or not I dunno; do you really need FTP? To undo this change, sudo mv /etc/services.oops /etc/services
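The Perl one-liner's substitution can be sketched in Python to show exactly what it changes (the sample line below paraphrases a services-file entry):

```python
import re

line = "ftp              21/tcp    # File Transfer [Control]"

# s/^(ftp\s+21)/${1}21/ -- capture 'ftp ... 21' at the start of the line
# and append '21', turning port 21 into 2121 for the ftp entries only.
rewritten = re.sub(r"^(ftp\s+21)", r"\g<1>21", line)
print(rewritten)
```

Note the anchor `^` and the captured `21`: only lines that begin with `ftp` followed by whitespace and `21` are touched, which is why the `scientia-ssdb` entries at 2121 are left alone.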
How to run tnftpd without root on OS X?
1,427,300,495,000
I am trying to deploy a WordPress website from a test server to a production server. wget seems to be an efficient solution for transferring lots of files between 2 servers via FTP. I connect to the target server, go to the /var/www folder, and I type:

wget -r ftp://fred:[email protected]/mywebsite/

It runs for 2 minutes and then states that 2312 files have been transferred. However, FileZilla would find over 5000 files! At first, I noticed that the .htaccess file was ignored. How come not all files have been handled by wget? How can I specify that I need all the files to be transferred?
The default recursion depth limit in wget is 5. This is primarily meant for the web where a large recursion is often a mistake, but the default also applies to FTP. Large recursion could also be a problem with FTP if the server has upward-pointing symbolic links. To make a complete mirror, pass -l -1 to make the recursion unlimited, or better, pass the --mirror option.
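A toy model of depth-limited recursion shows why files below the default limit never arrive (the directory tree here is made up):

```python
def crawl(tree, limit, depth=1):
    """Collect files from a nested dict, descending at most `limit` levels --
    a toy model of wget's -l recursion cap."""
    files = []
    for name, node in tree.items():
        if isinstance(node, dict):
            if depth < limit:
                files += crawl(node, limit, depth + 1)
        else:
            files.append(name)
    return files

site = {"index.html": 1, "d1": {"a.mp3": 1, "d2": {"deep.mp3": 1}}}
print(crawl(site, limit=2))   # ['index.html', 'a.mp3'] -- deep.mp3 is skipped
print(crawl(site, limit=99))  # all three files
```

Files more than five directories down are silently skipped under the default, which matches the symptom of thousands of "missing" files. (The `.htaccess` omission is a separate issue: FTP servers often hide dotfiles from plain `LIST` output.)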
Why is wget ignoring some files in ftp transfer?
1,427,300,495,000
OS: CentOS 7 with VSFTPD, with the firewall turned off for the moment
Client: FileZilla

VSFTPD configuration settings, from [root@Turbo ~]# vi /etc/vsftpd/vsftpd.conf:

anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
ftpd_banner=Welcome to the DataMover FTP service.
chroot_local_user=YES
local_root=/home/ftp-docs
listen=YES
listen_ipv6=NO
pasv_enable=YES
pasv_max_port=65534
pasv_min_port=1024
pasv_address=192.168.20.88
hide_file=NO
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

FileZilla settings:

Logon Type: Normal
Port: 21
Server Type: Default (tried UNIX)
Transfer Mode: Passive (tried Active)
Charset: Auto detect
Encryption: Only use plain FTP (insecure)

I tried all combinations of transfer mode and server type (between UNIX and auto detect) to no avail. I used plain FTP, as I did not add TLS to my CentOS machine. I had that protocol for a while; as I do not have it now, I just wasted time negotiating down to plain FTP. Here is the FileZilla output:

Status: Disconnected from server
Status: Connecting to 192.168.20.88:21...
Status: Connection established, waiting for welcome message...
Status: Connected
Status: Retrieving directory listing...
Status: Directory listing of "/" successful

What is interesting is that I do not see the welcome banner. I half expected to see that, as the FileZilla log says "waiting for welcome message...". Like the welcome message, the contents of the directory are blank. The user is fine. I logged in as the user and did a cd ~ and was able to verify that I went to the /home/ftp-docs directory. The log files show major nothingness. /var/log/messages:
Dec 11 09:23:35 Turbo systemd: [/usr/lib/systemd/system/lvm2-lvmetad.socket:9] Unknown lvalue 'RemoveOnStop' in section 'Socket' Dec 11 09:23:35 Turbo systemd: [/usr/lib/systemd/system/dm-event.socket:10] Unknown lvalue 'RemoveOnStop' in section 'Socket' Dec 11 09:23:36 Turbo avahi-daemon[1227]: Invalid response packet from host 192.168.20.74. Dec 11 09:23:37 Turbo avahi-daemon[1227]: Invalid response packet from host 192.168.20.74. Dec 11 09:23:37 Turbo avahi-daemon[1227]: Invalid response packet from host 192.168.20.74. Dec 11 09:23:40 Turbo systemd: Stopping Vsftpd ftp daemon... Dec 11 09:23:40 Turbo systemd: Stopped Vsftpd ftp daemon. Dec 11 09:23:45 Turbo systemd: Starting Vsftpd ftp daemon... Dec 11 09:23:45 Turbo systemd: Started Vsftpd ftp daemon. With all my testing today, the /var/log/xferlog, shows emptiness. Tue Dec 8 15:48:35 2015 1 ::ffff:192.168.20.74 0 /CreativeCloudSet-Up.exe b _ i r datamover ftp 0 * i ~ My command line on the server is fine too. [root@Turbo ~]# systemctl daemon-reload [root@Turbo ~]# systemctl stop vsftpd [root@Turbo ~]# systemctl start vsftpd [root@Turbo ~]# vi /var/log/messages [root@Turbo ~]# vi /var/log/xferlog [root@Turbo ~]# Just for completeness sake, the contents of the home directory are fine. [root@Turbo ~]# ls -l /home total 4 drwxr-xr-x. 3 root ftp-users 35 Dec 9 16:29 ftp-docs ... [root@Turbo ~]# [root@Turbo ~]# cd /home/ftp-docs/ [root@Turbo ftp-docs]# ls -l total 4 -rwxr--r--. 1 datamover root 13 Dec 8 15:47 smurfit.txt drwxr--r--. 2 root root 6 Dec 9 16:29 sub1 [root@Turbo ftp-docs]# I perused quite a few webpages, which is why I added hide_file and the four passive entries into my vsftpd.conf configuration file. Other people had a problem connecting. That is and was not my problem. I connect fine. I just do not see anything. 
Although I have the firewall disabled at the moment, here is my iptables setting (vi /etc/sysconfig/iptables):

# FTP
-A INPUT -m state --state NEW -m tcp -p tcp --dport 20 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1024:65535 -j ACCEPT

I did read that CentOS 7 uses FirewallD, not iptables, but as you can see:

[root@Turbo ~]# firewall-cmd --get-active-zones
FirewallD is not running
[root@Turbo ~]#

Here are some of the resources that I perused:

FTP FileZilla No route to host
Configure vsftpd to work with passive mode
VSFTPD does not list content of a directory (my problem)
How to install and configure VSFTPD on CentOS 7
Setting VSFTPD to allow user upload
Set default ftp root folder
Dir list not visible
VSFTPD to jail a user

Any thoughts on anything that I missed?
The resolution to this problem was so, so not obvious. When I knew the proper search terms (obvious now, but not for the past couple of days) and understood things better, the pieces started falling into place. Many posts on "VSFTPD directory content not listed" talked endlessly about passive vs. active, ports, permissions, use of hide_file, local_root, and others. The real epiphany came when I asked myself whether there was a way to get detailed logging; "vsftpd verbose logging" was my key search term. That one thought and its implementation hit the mother lode, so to speak. I got to know of the existence of SELinux. That partially caused my misery. This article, Installation and configuration of VSFTPD in CentOS with FTPS support and SELinux, did much to solve my problem. The instructions created a couple of files, mypol.*, which I deleted. That gave me access to smurfit.txt, but I still had a problem with sub1. I resolved that issue by noticing (why did it take me so long?) that the owner for sub1 was root, not datamover. When I fixed ownership (okay, permissions), then I had access to that folder as well. I just tried downloading both files, with success. I still cannot upload to /home/ftp-docs/ and must maintain root as the owner for that folder as well as 755 permissions. Any deviation causes FTP to stop working. I can, however, upload to sub1. So the solution (workaround?) is to simply upload to a sub-folder. In order to get things to work, I had to do the following at the end, where /mnt/raid1 is the home directory for the FTP client in this particular case. (I used to use /home/ftp-docs/.)

# /sbin/restorecon -v /mnt/raid1
# setsebool -P ftpd_full_access 1

Useful links I used:

How to disable SELinux on CentOS
How to troubleshoot SELinux Problems
Troubleshooting SELinux Issues on CentOS and Red Hat
Directory Contents Not Listed When Connecting To CentOS 7 VSFTPD
1,427,300,495,000
I have read the manual page for wget and searched Google for an answer, but haven't been able to find one. How would I use wget to download only folders that contain a specific term in their names from an FTP site that also has multiple other directories without that term in their names?
wget -I /people,/cgi-bin http://host/people/bozo/

-I is followed by a comma-separated list of the directories you want to include. However, note that this will just match the full directory names listed in -I, I believe, not part of a name; it sounds like you may want to include directories based on a partial pattern. If anyone knows how to do regex-type excludes/includes with wget, I'd like to learn that; I've looked for it for years. https://www.gnu.org/software/wget/manual/html_node/Directory_002dBased-Limits.html
Download only specific directories from FTP site using wget
1,427,300,495,000
I would like a simple (ideally one-liner, without separate script file) command to connect to an ftp server via anonymous login using my email address as the password. My attempt is based on the syntax as shown here of basically ftp username:password@hostname; however this does not work for me when the password itself, being an email address, has an @ sign.. I also tried to provide a netrc file as a heredoc, as so: ftp hostname -N <<< 'login anonymous password [email protected]' but this still prompts me for a password during the ftp login..
Use lftp. Example:

lftp -u user,pass ftp.example.com
Specify anonymous ftp password in ftp command
1,427,300,495,000
I succeeded in transferring all files from a folder via FTP from a remote server to a Raspberry, but I would like to transfer only new ones. Below is the working script I have.

#!/bin/bash -vx
ftp -in IP_SERVER<<END_SCRIPT
quote USER rem_user
quote PASS rem_pass
bin
prompt:off
cd /path_to_server_files
lcd /path_to_local_files
mget *.mp3
bye
END_SCRIPT

I have a company that provides background music to other companies. My method was leaving a computer in each one playing 24/7, or with other specific cron jobs, depending on the client. And the Raspberry is a great way to do that instead of a computer. The method I have right now that is working is a cron job per folder. Each folder has a type of music. So I will be putting different music there from time to time and the cron job will transfer those files once a week. It is set to transfer every mp3 file in that folder to the RPi. The thing is, it will transfer all the files there, including the ones that were already there. If I put there, for example, 150 music files, it will take a long time transferring those, not to mention if it is done with all the folders, since the RPi ARM is not that powerful. The solution would be not overwriting the files already there, just the new ones. Then after some time another cron job will delete all the files that are more than * days old. I searched but it seems ftp doesn't have an option like this yet. So I found the wget command, which allows transferring without overwriting, but I couldn't make it transfer multiple files. I have been trying to convert the script above to the wget command without success. Can someone with experience in this matter help out? It could be a problem with http also. Thanks in advance.
I have tried with the wget command:

* * * * * wget -r -l1 -N -A.mp3 'ftp://serverUser:Password@serverIP/path_to_server_files' /var/www/rd/musica/teste/ftp11.log 2>&1

Errors:

ftp://serverUser:Password@serverIP/path_to_server_files: Bad Port Number
/var/www/rd/musica/teste: Scheme Missing

This is my attempt with rsync. The rsyncd.conf (I am not sure if all credentials are right, so I'll put every file in here so it can be corrected):

lock file = /var/run/rsync.lock
lock file = /var/log/rsyncd.log
pid file = /var/run/rsync.pid
[documents]
path = /var/www/rd/musica/teste
comment = The documents folder of localusername
uid = localusername
gid = localusername
read only = no
list = yes
auth users = serverusername
secrets file =/etc/rsync.secrets
hosts allow = serverIP/255.255.255.0

rsyncd.secrets:

localuser:password
serveruser:password

Command to run rsync:

rsync -rtv serverusername@serverIP::documents/path_to_server_files/*.mp3 /path_to_local_destination_folder

It returns these errors:

rsync: failed to connect to serverIP (serverIP): Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
SOLUTION - I got this script to work. Thank you for all the support you gave. If I have enough time to keep working on this, I'll keep trying the other options and make them work as well.

#!/usr/bin/python
import os
from ftplib import FTP

local_path='/path_to_local_files/'
os.chdir(local_path)

ftp = FTP(host='server_name_or_IP',user='username', passwd='password')
ftp.cwd('/path_to_local_files/')

f_list = ftp.nlst()
for f in f_list:
    if not f.endswith("mp3"):
        continue
    new_f_name = local_path + f
    if os.path.exists(new_f_name):
        continue
    print("Copying remote file <{0}>to local file <{1}>".format(f,new_f_name))
    ftp.retrbinary('RETR '+ f, open(new_f_name,'wb').write)

You may need to install this in order for the script to work:

sudo apt-get install python-dev
Transferring mp3 files from a remote server via cronjob to the RPi without overwriting
1,427,300,495,000
Usually, when I drag a link to the terminal, the link target is pasted in the terminal. However, when I just dragged an FTP link from Chromium to my Terminal (first result on Google for "ftp pdf", ftp://ftp.ipswitch.com/ipswitch/manuals/ftpserv.pdf), a file download was immediately initiated (I knew because a desktop notification appeared). I tried to locate the downloaded file using find ~ -iname '*.pdf' (also in /tmp), but I didn't find any traces. When I tried to reproduce the problem by dragging the ftp-URL again, the download completed much faster than before, from which I infer that the file is still cached somewhere. So, my questions are:

Why is the ftp resource being downloaded after I drag the link to my Terminal? (instead of being pasted as text, like http-URLs)
Where can I find the downloaded file? (I want to delete it)
Why is the ftp resource being downloaded after I drag the link to my Terminal? (instead of being pasted as text, like http-URLs) The resource is not being "downloaded", but queried for its path and type. When dropping a file or URL on Konsole, it will try to convert it to a URL or path. Try for instance dropping a file:///tmp/ URL from Firefox to Konsole, it will instead show up as /tmp/. Another reason why this status call is needed is to check whether it is a directory or not (so you can drop a directory and select Change Directory To. For this status discovery, Konsole depends on various protocols to provide a KIO interface. Now usually you should get a menu with Copy Here, Move Here, Link Here and Paste Location options. However for some reason (presumably a race condition bug?), the menu is not always offered and a download is immediately started (relevant code is in TerminalDisplay::dropEvent). This was observed in Konsole 15.08.0 (Qt 5) on Arch Linux by the way. Where can I find the downloaded file? (I want to delete it) There is a good chance that the file was not actually saved and so you do not need to delete it. If you think it is still somewhere, look in /proc/$pid/fd/ for file descriptors of your process (or invoke lsof). (Have you also tried /var/tmp/ besides /tmp and ~?) In the next 15.xx version, a new option is available to disable this Drag 'n' Drop functionality and simply paste everything as text (bug 304290). You can find this by opening your Profile settings, tab Mouse, option Disable drag and drop menu for URLs and files.
Where to find the downloaded file after dragging a FTP URL to the terminal?
1,427,300,495,000
It was a wonder for me to see why the file owner changes when I log into the remote machine with different usernames. Please see:

First I logged into a remote machine using username: peacenews. I entered a directory (pwd outputs / as the directory). When I run ls to see the files there, I get the below (I am only showing the last few lines of the output):

-rwxrwxrwx 1 peacenew 504 198311940 Oct 4 02:21 Rotary club ORC, Delhi .m2p
drwxrwxrwx 2 peacenew 504      4096 Sep 19 23:09 Vizianagaram, AP
-rwxrwxrwx 1 peacenew 504 296817474 Oct 3 10:30 dehradun-prem.VOB
226-Options: -l
226 18 matches total

Then I logged out and reconnected as the anonymous user. This time also I logged into the same directory (having the same filenames). See the output of ls please (last few lines only):

-rwxrwxrwx 1 504 504 198311940 Oct 4 02:21 Rotary club ORC, Delhi .m2p
drwxrwxrwx 2 504 504      4096 Sep 19 23:09 Vizianagaram, AP
-rwxrwxrwx 1 504 504 296817474 Oct 3 10:30 dehradun-prem.VOB
226-Options: -l
226 18 matches total

Note the 3rd column and compare with the previous output. The group owner for the files is the same, but the file owner changes. How come? It's a wonder!!!
I have resolved the matter on my own. I did my research by doing ftp to my own server. The thing is that the file owner isn't changing. The 1st output shows the username and the 2nd shows the user ID for the same user. So simple but for confirming this I had to experiment with my own server.
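The long listing shows a name in the owner column only when the listing side can resolve the numeric UID against its user database; you can perform the same lookup yourself. A minimal sketch (UID 0 used only because it exists on every system):

```shell
# look up the account name behind a numeric UID, the same resolution
# ls -l performs when it prints an owner name instead of a number
getent passwd 0 | cut -d: -f1
# prints: root
```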
ftp: in remote machine, file owner changes with the change in username used for logging in the remote machine
1,427,300,495,000
I'm using vsftpd. By default, when I create a user, they are jailed in their directory, which is /home/user. I have enabled chroot_local_user=YES. On the other hand, I also wanted to create a shared directory for all the FTP users. So, in a nutshell, they have their own directory and they have a shared group folder / +/home +user1 +shared_folder How can I give the users access to the shared directory over FTP?

Try to "mount --bind" the shared directory into the user's directory. mount --bind /home/actual_share/ /home/someguy/shared/ I assume you'll need to add group write to the "actual_share". Got the idea from this forum post.
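To make such a bind mount survive a reboot, it can also go into /etc/fstab; a sketch using the same hypothetical paths as above:

```
/home/actual_share  /home/someguy/shared  none  bind  0  0
```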
Share a directory over FTP with chroot_local enabled
1,359,646,422,000
Folks over at SO couldn't answer this, so I am posting over here. I need to do an FTP over SSL to a FileZilla Server running on a Windows server from an AIX Unix client. I have the host name of the destination server, the user id and password, and an SSL certificate. I am not sure how to install that certificate in AIX. When I do an ftp command (using the below code) from Unix it does connect successfully to the FileZilla server and I am able to do get and/or mget. But I am not sure if that is happening over SSL, since I haven't installed the SSL certificate yet. Do I need to install the certificate in the Unix box (AIX)? If yes, then how? (specific steps and commands) And how do I utilize the installed SSL certificate to do FTP after the installation? $ ftp XXX.XXX.XXX.243 Connected to XXX.XXX.XXX.243. 220-FileZilla Server version 0.9.24 beta 220-written by Tim Kosse 220 Please visit http://sourceforge.net/projects/filezilla/ Name (XXX.XXX.XXX.243:littercat): joyride 331 SSL required Password: 230 Logged on ftp> Note: SSH (SFTP/SCP) is not an option and it has to be FTP over SSL/TLS only (from AIX UNIX to Windows FileZilla). AIX version 5.3. Third party tools cannot be used (eg. cURL etc.)
The standard AIX ftp client does not support SSL or TLS. I'd be very interested if you find a way to get this going without 3rd party tools. You can grab lftp from several sources ... we have used that in production successfully for a few years now on AIX 5.3. I've used the rpm available here lftp rpm for AIX, as well as compiling from source lftp download, although the latter can take a bit of extra work for things like GNUtls. Newer versions of AIX 6.1+ include a secure option for the FTP client to use TLS.
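For reference, a hedged sketch of what an lftp transfer forcing explicit TLS might look like — the host, user and paths are placeholders, and the ftp:ssl-* settings require an lftp build with TLS support. The batch file is printed here instead of executed:

```shell
# write an lftp batch that forces FTP over explicit TLS on both the
# control and data channels (host/user/paths are placeholders)
cat > ftps_get.lftp <<'EOF'
set ftp:ssl-force true
set ftp:ssl-protect-data true
open -u joyride ftp.example.com
get /remote/path/file.dat -o /local/path/file.dat
bye
EOF
cat ftps_get.lftp
# run it with: lftp -f ftps_get.lftp
```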
FTP over SSL in AIX (UNIX)
1,359,646,422,000
I recently installed a CentOS machine for a couple of game servers. I followed some tutorials on the web for that. But I want to use SFTP to store data on the server and it is very slow; I read that FTP is much faster. So my question is: can I change my SFTP server to an FTP server, and if so, how?
First you have to analyze why SFTP is "slow". Does the CPU usage on the server or client rise to a very high level during transfer? Are there any lost packets on the network layer? Do the duplex settings on your network adapters match those of the network devices (switches, routers, whatever)? Apart from that, FTP uses a very different protocol, with no encryption, and it can be difficult to get through firewalls, while SFTP operates over SSH and generally Just Works.
change sftp to ftp
1,359,646,422,000
I have the below script to pull the file from another server. #!/bin/bash PROC_DATE=`date "+%Y%m%d"` CURRENT_DATE=`date` LOG_FILE="/home/tdata/dw/da/common/AWS_${PROC_DATE}.log" AWS_CRED_FILE_NAME="/home/tdata/.aws/crnals" cp -p ${AWS_CRED_FILE_NAME} ${AWS_CRED_FILE_NAME}_${PROC_DATE} if [ $? -ne 0 ] ; then echo "Credentials backup Failed\n" > ${LOG_FILE} cat ${LOG_FILE} exit 1; else cd /home/tdata/.aws/ sshpass -f .OP004 sftp -o ConnectionAttempts=1 tdata@OP004 << ! cd /home/tdata/.aws/ get crnals bye ! if [ $? -ne 0 ] then echo "The credentials copy from OP004 to CP104 failed. Please check and re-run manually" > ${LOG_FILE} cat ${LOG_FILE} exit 1; else echo "The credentials copy job completed successfully on ${CURRENT_DATE}" > ${LOG_FILE} exit 0; fi fi But it is getting stuck for a very long time, just like below, and not getting completed at all. $ sh -x AWS_CRED_COPY.sh ++ date +%Y%m%d + PROC_DATE=20221004 ++ date + CURRENT_DATE='Tue Oct 4 02:07:56 CDT 2022' + LOG_FILE=/home/tdata/dw/da/common/AWS_20221004.log + AWS_CRED_FILE_NAME=/home/tdata/.aws/crnals + cp -p /home/tdata/.aws/crnals /home/tdata/.aws/crnals_20221004 + '[' 0 -ne 0 ']' + cd /home/teradata/.aws/ + sshpass -f .OP004 sftp -o ConnectionAttempts=1 teradata@OP004 I even tried adding the Timeout parameter for 20 secs; that is also not working. But when I run the SFTP block manually on the CLI, it works well and as expected. tdata@cp114 -bash /opt/tdata/dw/data/common $ sshpass -f .OP004 sftp -o ConnectionAttempts=1 teradata@OP004 << ! > cd /home/teradata/.aws/ > get crnals > bye > ! Connected to ODRP5004. sftp> cd /home/teradata/.aws/ sftp> get crnals Fetching /home/tdata/.aws/crnals to canals sftp> bye Any idea on how we can resolve this?
I successfully ran a snippet like yours. Running the command directly: removing the password file led to an indefinite halt in the terminal; putting a wrong password in the file timed out with Permission denied, please try again.. Running the script gave the same results as above. I guess you are missing the password file. Trivia: sshpass takes the string up to the first newline character.
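Since sshpass -f only reads up to the first newline, it is worth checking how the password file was created. A sketch of creating it cleanly — the password is a placeholder, the filename matches the question:

```shell
# create the file sshpass -f reads; only the text before the first
# newline counts as the password, so avoid editors that add extra lines
printf '%s\n' 'the-real-password' > .OP004
chmod 600 .OP004
head -n 1 .OP004
# prints: the-real-password
```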
SFTP job getting hung when running through shell script
1,359,646,422,000
I'm using mget in an ftp script and would like mget to skip getting files that already exist in my local directory. Is there such a way? If so, is there also a way for mget to proceed if the local and remote files are different? EDIT: Both are excellent answers, I went with the 2nd because it provided an example which I modified to: lftp $HOST -u $USER,$PASSWD -e 'mirror --verbose=3 --only-newer --include "my-extended-regex-expression-of-include-file-patterns" / transfer/'
There are many different ftp implementations. I don't know of any ftp program that has an mget command that checks local files before downloading. There are many programs that can download files over FTP and that do have what you want. lftp lftp ftp.example.com <<EOF mirror --only-missing --file=/path/to/*.ext . EOF Replace --only-missing by --only-newer to download only newer files. wget wget -nc 'ftp://ftp.example.com/path/to/*.ext' Add the option -N to download only newer files. Alternatively, and this is the best solution to do complex things, mount the remote filesystem and then use normal file manipulation tools. You can mount an FTP directory with CurlFtpFS. mkdir mnt curlftpfs ftp://ftp.example.com/ mnt rsync -a --ignore-existing mnt/*.ext . # replace --ignore-existing by -u to download only newer files fusermount -u mnt
ftp mget don't overwrite
1,359,646,422,000
Question When I start SSH server, my Debian automatically start the SFTP server as well - why is it design in such way? Environment: Linux 5.10.0-14-amd64 Debian 5.10.113-1 (2022-04-29) x86_64 GNU/Linux ssh.service - OpenBSD Secure Shell server Background Today I realized: when I want to handle http requests, I start a web server - Apache(2), Node.js, etc. when I want to handle SSH, I start an SSH server when I want to handle SFTP... Debian already started SFTP server for me So I researched, and according to this post 378313/default-sftp-server-in-debian-9-stretch, I found out SFTP is started as "part of (Open)SSH" which makes perfect sense but also feels strange for reasons such as separation of concerns. Unlike Windows, I have never felt Debian doing something unexpected or extra on my behalf. But today I felt it - after all I said systemctl restart ssh, not systemctl restart ssh-and-also-ftp (the latter command is made-up). As I am new to Unix/Linux and its philosophy, I would appreciate if there are any good explanations for this situation.
Although SFTP is not part of the extensible core SSH protocol, it is built in to at least one of the common SSH implementations (OpenSSH) and therefore can be considered to be a standard component. You can disable the functionality on the server if it's not required by changing /etc/ssh/sshd_config so that you remove the Subsystem line corresponding to the sftp-server. For example, this line defines an external sftp-server utility to handle the SFTP service (the path varies between distributions): Subsystem sftp /usr/lib/openssh/sftp-server This line defines an internal implementation of the SFTP service: Subsystem sftp internal-sftp Removing or commenting out the Subsystem line will disable the SFTP service entirely. # Subsystem … Remember that tools such as rsync (if it's installed) and versions of scp will still function, though, so disabling SFTP will not of itself prevent users from transferring files between client and server. (Older versions of scp will work independently of SFTP. Newer versions use SFTP but can be forced to use the older protocol with the -O flag.) There are also "trivial" solutions such as ssh remoteHost cat somefile > local_copy_of_somefile to consider.
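One way to check what a server currently enables is to grep its config for active Subsystem lines. A self-contained sketch, run against a sample file standing in for the real /etc/ssh/sshd_config:

```shell
# count active (non-commented) "Subsystem sftp" lines in a config file;
# this sample file stands in for /etc/ssh/sshd_config
cat > sample_sshd_config <<'EOF'
Port 22
#Subsystem sftp /usr/lib/openssh/sftp-server
EOF
grep -Ec '^[[:space:]]*Subsystem[[:space:]]+sftp' sample_sshd_config || true
# prints 0 here, i.e. the SFTP subsystem is commented out / disabled
```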
Why do I not need to start an SFTP Server ( why does SSH automatically start SFTP )?
1,359,646,422,000
I'm accessing my localhost ftp server by Iceweasel or by the terminal (with the command ftp) while listening on the ftp port with sudo tcpdump -vv -A 'port 20' or sudo tcpdump -vv -A port ftp but nothing is printed although the connection is well established. What am I doing wrong?
You indicated you are connecting via localhost. You will probably need to specify the interface with -i lo (or lo0 on a Mac), so use: sudo tcpdump -i lo -vv -A port ftp or on a Mac: sudo tcpdump -i lo0 -vv -A port ftp Then you should see the traffic. The reason: -i Listen on interface. If unspecified, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback), which may turn out to be, for example, "eth0".
Why don't I see packets when I listen on my ftp port with tcpdump on localhost?
1,359,646,422,000
I think it is because I tried to install a Slackware 13 .txz package, while mine is Slackware 14.1 64-bit, but I do not know where to download a package for Slackware 14.1. Can someone help me?
Compile from source (according to the wiki https://wiki.filezilla-project.org/Client_Compile ): Install dependencies: wxWidgets GnuTLS libidn gettext (compile-time only) libdbus (under Unix-like systems) Download the source package: http://sourceforge.net/projects/filezilla/files/FileZilla_Client/3.7.4.1/FileZilla_3.7.4.1_src.tar.bz2/download Extract the source archive: tar -xvf File-name.tar.bz2 Enter the extracted directory and compile: ./configure make make install That's all.
FileZilla will not start on Slackware
1,359,646,422,000
I tried to install curlftpfs on Debian 12 and it says that the package is missing. While I understand that the package is no longer actively developed, I often use curlftpfs inside some virtual machines to transfer between various filesystems, e.g. Windows/Linux VMs that pass files through FileZilla Server and so on. So it would still be useful to me, even in a non-secure (plain FTP rather than FTPS) fashion. Is there a way to install that package, or a replacement package like it, without breaking my Debian 12 installation? Thanks
curlftpfs ships a single, simple binary package; you can safely install the Debian 11 version in Debian 12. Assuming you’re using amd64: wget http://deb.debian.org/debian/pool/main/c/curlftpfs/curlftpfs_0.9.2-9+b1_amd64.deb sudo apt install ./curlftpfs_0.9.2-9+b1_amd64.deb
Is curlftpfs missing in debian 12?
1,359,646,422,000
An article at this URI https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-firewalls-iptables_and_connection_tracking RELATED — A packet that is requesting a new connection but is part of an existing connection. For example, FTP uses port 21 to establish a connection, but data is transferred on a different port (typically port 20). states that netfilter connection tracking mechanism is able to tag the ftp's out-of-band data-connection traffic to be in RELATED state with the ftp's control-connection. I have the following questions. 1) Can it do a similar tracking operation when data-connection is setup with non-default ports ? 2) If yes, Does it go through the FTP messages to find out the data-connection tuple (dst_ip, dst_prt, src_ip, src_prt) ? I know that is way too impractical to implement. So how does netfliter really achieves this ?
Does it go through the FTP messages to find out the data-connection tuple (dst_ip, dst_prt, src_ip, src_prt) ? I know that is way too impractical to implement. Yes, that is exactly what it does. Why do you think it's too impractical? You can look at the code yourself here: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/net/netfilter/nf_conntrack_ftp.c Edit: OK, after your later comments I think I now understand what you meant. The kernel would only need to analyze the beginning of the first few data packets of every TCP connection to see if it looks like a FTP control connection or not, and only mark the actual FTP control connections for further analysis. Only the connections that look like FTP would be monitored for data-connection tuples. But a few years ago, it turned out that such fully-automatic tracking could be abused for malicious purposes. So with modern kernels, you now need to explicitly set up iptables connection tracking helper rules for protocols that need them, and that means if you use a non-default destination port for the FTP connection, you'll need a custom rule for that. But now you can fully control which interfaces, ports and connection destinations/directions will get the tracking helpers and which will not. The connection tracking helper rule for FTP in regular should look like this: iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp If you have a firewall that only accepts connections to specific inbound ports, you might also need a rule like this in your INPUT and/or FORWARD chain to accept the inbound active FTP connections: iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper ftp -p tcp --dport 1024: -j ACCEPT For data connections of control connections using a non-default port, you'll need a slightly modified rule, e.g. 
to accept data connections belonging to a control connection in port 2121: iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper ftp-2121 -p tcp --dport 1024: -j ACCEPT By the way, there are several connection tracking helper modules available: ftp for FTP protocol, obviously. irc for the Internet Relay Chat protocol. Port numbers will vary. netbios-ns which you should not need for anything any more, since the WannaCry worm proved the SMB 1.0 protocol (that was used with the old NetBIOS style Windows filesharing) has a fatal flaw. Standard port for this would be 137/UDP. snmp for the Simple Network Management Protocol, standard port 161/UDP. RAS and Q.931 for h.323 video-conferencing sub-protocols (the old Microsoft NetMeeting etc). Ports 1719/UDP and 1720/TCP respectively. sip for the SIP internet telephony protocol. Standard port 5060, both TCP and UDP supported. sane for the network protocol of the SANE scanner software, standard port 6566/TCP. pptp for the RFC2637 Point-to-Point Tunneling Protocol, a form of VPN. tftp if you need to pass TFTP connections across a NAT. amanda for the network protocol of Amanda backup software.
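The "slightly modified rule" above presupposes that a helper instance exists for the non-default port. As an assumed, untested sketch (requires root and the nf_conntrack_ftp module; the second helper name is derived from the port number), loading the helper for an extra control port and attaching it with a raw-table CT rule might look like:

```shell
# load the FTP conntrack helper for ports 21 and 2121; the kernel
# registers the second instance under the name "ftp-2121"
modprobe nf_conntrack_ftp ports=21,2121
# attach that helper to control connections arriving on port 2121
iptables -t raw -A PREROUTING -p tcp --dport 2121 -j CT --helper ftp-2121
```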
How is netfilter able to track the out-of-band data-connections of FTP to be in RELATED state with control-connection?
1,359,646,422,000
I'm trying to mirror a directory via FTP with wget. The command I'm using is wget -m ftp://user:[email protected]/foo/bar/ But, when I run it, I get the following: --2018-10-10 15:01:32-- ftp://user:*password*@192.168.1.1/foo/bar/ => ‘192.168.3.150/foo/bar/.listing’ Connecting to 192.168.1.1:21... connected. Logging in as user ... Logged in! ==> SYST ... done. ==> PWD ... done. ==> TYPE I ... done. ==> CWD (1) /usr/user/foo/bar ... No such directory ‘foo/bar’. I've searched the man pages, and googled, and I can't figure it out. How do I make wget actually download the directory "/foo/bar/", and not "/usr/user/foo/bar/"?
A similar question on stackoverflow (which involved java instead of wget, but really the underlying problem is with the URL syntax which is hopefully language-independent) was resolved by adding another slash and URL-encoding it, like this: wget -m ftp://user:[email protected]/%2Ffoo/bar/ It works for me even without encoding, just with an extra slash: wget -m ftp://user:[email protected]//foo/bar/ The first slash is thrown away (serving only as a separator between host and path), and the second slash actually counts as part of the path.
Using wget on a directory outside the user's home directory
1,359,646,422,000
I'm configuring an FTP server on an embedded Linux system. I've set in my /etc/passwd a number of local users, for example: UserA:14:50:FTP User A:/somepath:/bin/sh UserB:1000:0:FTP User B:/somepath:/bin/sh The problem is that I don't know how to set write_enable (whether any FTP commands which change the file system are allowed or not) per user. I want UserA to be able to write and UserB not. Example local.vsftpd.conf: anonymous_enable=NO local_enable=YES userlist_deny=YES userlist_enable=YES userlist_file=/UNI/System/Config/deny_vsftpd_user_list.conf listen=YES chroot_local_user=YES xferlog_std_format=NO xferlog_enable=YES vsftpd_log_file=/tmp/vsftpd.log allow_writeable_chroot=YES write_enable=YES
Found the answer in this link. Here an extract of my '/etc/vsftpd.conf': # This powerful option allows the override of any config option specified # in the manual page, on a per-user basis. Usage is simple, and is best # illustrated with an example. If you set user_config_dir to be /etc/vsftpd_user_conf # and then log on as the user "chris", then vsftpd will apply the settings # in the file /etc/vsftpd_user_conf/chris for the duration of the session. # Default: (none) user_config_dir=/etc/vsftpd/vsftpd-user-conf Each time a new ftp virtual user need a personal ftp directory, under the dir '/etc/vsftpd/vsftpd-user-conf' I create a file named as the username where I define the personal ftp directory and the auth on it (RO or RW). Example for user 'test' (file '/etc/vsftpd/vsftpd-user-conf/test'): # vsftpd per-user basis config file (override of any config option specified # in the vsftpd server config file) # # TEMPLATE # # User test - Description for user test # # Set local root local_root=/srv/vsftpd/test # Disable any form of FTP write command. # Allowed values: YES/NO write_enable=YES
FTP server (VSFTP) "write_enable=YES" configuration per user
1,359,646,422,000
I sent a Windows file to a Unix system using FTP and got ^M appended wherever a new line was intended, and I just want to remove them. One method I could opt for is to run the dos2unix command. Can anyone suggest another method, like a sed command, to remove such patterns?
Windows line endings consist of the two-character sequence CR, LF. CR is the carriage return character, sometimes represented as \r, \015, ^M, etc. A Unix line ending is just the LF character. A way to convert Windows line endings to Unix line endings using only standard utilities present on all Unix variants is to use the tr utility. tr -d '\r' <thefile >thefile.new && mv thefile.new thefile If the file already has Unix line endings, its content won't be changed. If you have many files to transform in the current directory, you can use a loop. Assuming that you don't have any files whose name ends with .new: for x in *; do tr -d '\r' <"$x" >"$x.new" && mv "$x.new" "$x" done Under Linux (excluding some embedded Linux systems) or Cygwin, you can use sed. The -i option to edit a file in place is specific to these systems. The notation \r for a CR character is more widespread but not universal. sed -i -e 's/\r//g' thefile
Remove special pattern ^M from script which got appended after FTP from windows to Unix
1,359,646,422,000
I am using: SCO_SV scosysv 3.2 5.0.7 i386, and I am trying to download ssh and install it. But the only way I can do it, is by using FTP. I have tried the following: # ftp ftp> ftp2.sco.com ?Invalid command ftp> What am I supposed to do? I have never used FTP before in any Linux machine. I have Googled it, and I looked at every entry on the first page, and that is all talking about downloading packages from FTP sites—no use.
A very simple use of the ftp client would be to specify the server's hostname on the command line: ftp hostname. Then use ftp commands ls and cd [directory] to navigate in the server's directory structure and use get [file] to fetch the desired file. Notes: FTP servers usually allow login for anyone, provided you use the anonymous username. To connect to ftp2.sco.com specifically, you'll have to activate passive mode using -p option: ftp -p ftp2.sco.com.
How to Download from an FTP site
1,359,646,422,000
I've tried following numerous tutorials found on the net. The setup always starts out simple, like sudo apt-get install vsftpd, and then goes into editing the /etc/vsftpd.conf file. This is where the tutorials fail me, because most of them either leave the setup pointing at the user's home directory, or start getting into the chroot setup, which appears more complicated than I need. Some tutorials attempt to explain how to set the default path by simply saying "add local_root=/var/www in your config file", which I added to the end of the file. This causes the prompt to get stuck after entering the username. The problem here is that I don't know if the config directives have a specific ordering. And if there is one, the tutorials don't go into where the directive needs to be placed. I just need a single sign-on that points to my /var/www Thanks!
This is actually quite simple - all you need to do is change the home directory definition of that user's entry within /etc/passwd.
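For illustration, here is what that change amounts to, applied to a sample line so it can run without touching the real /etc/passwd — in practice usermod -d /var/www username (run as root) performs the edit safely. The account name here is hypothetical:

```shell
# the home directory is the 6th colon-separated field of a passwd entry;
# rewrite it from /home/ftpuser to /var/www on a sample line
echo 'ftpuser:x:1001:1001:FTP user:/home/ftpuser:/bin/sh' > sample_passwd
sed -i 's#:/home/ftpuser:#:/var/www:#' sample_passwd
cat sample_passwd
# prints: ftpuser:x:1001:1001:FTP user:/var/www:/bin/sh
```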
What is the easiest way to set up a FTP server single signon pointing to /var/www?
1,501,174,611,000
I can normaly mount/umount FTP as file system using following commands: └──> curlftpfs -o codepage=windows-1250 anonymous:[email protected] /home/marek/ftpfs └──> ls /home/marek/ftpfs/ 1 2 3 └──> fusermount -u /home/marek/ftpfs └──> ls /home/marek/ftpfs/ └──> But when I issue curlftpfs with strace then nothing is mounted and the process exits with status 1: └──> strace -f curlftpfs -o codepage=windows-1250 anonymous:[email protected] /home/marek/ftpfs └──> echo $0 1 └──> ls /home/marek/ftpfs/ └──> Last lines from strace (full output is here): [pid 9619] mprotect(0x7f08780b2000, 4096, PROT_READ) = 0 [pid 9619] mprotect(0x7f08782bd000, 4096, PROT_READ) = 0 [pid 9619] munmap(0x7f0878e8d000, 135950) = 0 [pid 9619] open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 6 [pid 9619] lseek(6, 0, SEEK_CUR) = 0 [pid 9619] fstat(6, {st_mode=S_IFREG|0644, st_size=2290, ...}) = 0 [pid 9619] mmap(NULL, 2290, PROT_READ, MAP_SHARED, 6, 0) = 0x7f0878eae000 [pid 9619] lseek(6, 2290, SEEK_SET) = 2290 [pid 9619] munmap(0x7f0878eae000, 2290) = 0 [pid 9619] close(6) = 0 [pid 9619] getgid() = 1000 [pid 9619] getuid() = 1000 [pid 9619] openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 6 [pid 9619] getdents(6, /* 2 entries */, 32768) = 48 [pid 9619] getdents(6, /* 0 entries */, 32768) = 0 [pid 9619] close(6) = 0 [pid 9619] mount("curlftpfs#ftp://anonymous:[email protected]/", ".", "fuse", MS_NOSUID|MS_NODEV, "fd=3,rootmode=40000,user_id=1000"...) = -1 EPERM (Operation not permitted) [pid 9619] write(2, "fusermount: mount failed: Operat"..., 50fusermount: mount failed: Operation not permitted ) = 50 [pid 9619] close(3) = 0 [pid 9619] exit_group(1) = ? [pid 9618] <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"", 1}], msg_controllen=0, msg_flags=0}, 0) = 0 [pid 9618] close(6) = 0 [pid 9618] wait4(9619, <unfinished ...> [pid 9619] +++ exited with 1 +++ <... 
wait4 resumed> NULL, 0, NULL) = 9619 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=9619, si_uid=1000, si_status=1, si_utime=0, si_stime=0} --- sendto(4, "QUIT\r\n", 6, MSG_NOSIGNAL, NULL, 0) = 6 poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 1000) = 1 ([{fd=4, revents=POLLIN|POLLRDNORM}]) recvfrom(4, "221 Bye\r\n", 16384, 0, NULL, NULL) = 9 close(4) = 0 close(3) = 0 exit_group(1) = ? +++ exited with 1 +++
I am not familiar with this executable, but my guess is that it needs to run with privilege. In fact, your strace output shows it is the fusermount helper that fails: fusermount is normally installed setuid root, and the kernel ignores the setuid bit on traced processes, so under strace it runs unprivileged and the mount() call fails with EPERM. strace -f cannot run such a process with privilege unless strace itself is run as root, and you may need the -u option.
exit status of command is different when it is run via strace
1,501,174,611,000
I have a centos 6.5 installed on Virtual Box. I'm trying to configure a basic ftp server. Below are details of it. Installed vsftpd from DVD/Yum: Status -> Successful Disabled firewall for test purposes, chkconfig iptables off : successful service iptables stop : successful Also set selinux in disabled mode: successful Added a rule in VirtualBox for port forwarding over NAT: Rule Protocol Host Port Guest Port Ftp tcp. 2121. 21 Now when I try to connect with local user or anonymous, it gives a number of errors, every time a different error. I'm also adding log messages shown by Filezila on my Host machine. -> LocalUserLogFromClient Status: Disconnected from server Status: Resolving address of localhost Status: Connecting to [::1]:2121... Status: Connection attempt failed with "ECONNREFUSED - Connection refused by server", trying next address. Status: Connecting to 127.0.0.1:2121... Status: Connection established, waiting for welcome message... Response: 220 Welcome to C6G FTP service. Command: AUTH TLS Response: 530 Please login with USER and PASS. Command: AUTH SSL Response: 530 Please login with USER and PASS. Status: Insecure server, it does not support FTP over TLS. Command: USER FUser1 Response: 331 Please specify the password. Command: PASS ***** Error: Connection timed out after 20 seconds of inactivity Error: Could not connect to server -> AnonymousUserLog Status: Resolving address of localhost Status: Connecting to [::1]:2121... Status: Connection attempt failed with "ECONNREFUSED - Connection refused by server", trying next address. Status: Connecting to 127.0.0.1:2121... Status: Connection established, waiting for welcome message... Status: Insecure server, it does not support FTP over TLS. Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" Command: TYPE I Response: 200 Switching to Binary mode. Command: PASV Response: 227 Entering Passive Mode (10,0,2,15,88,204). 
Command: LIST Error: The data connection could not be established: 10065 Error: Connection timed out after 20 seconds of inactivity Error: Failed to retrieve directory listing
An FTP server behind a port forwarder can’t serve in the passive mode unless the software used has a special design to pass FTP data connections through. The cases are: The NAT software being able to sniff for 227 Entering Passive Mode in the FTP control connection, to do port forwarding accordingly plus some mangling of the FTP control data. The FTP server sending passive mode with the visible (NAT) IP address and the NAT forwarding a given range of ports. Thanks to @derobert for pointing out a flaw in my original reasoning. Linux kernel NAT is supplied with (optional) FTP support. But if Linux is the guest, then its NAT has nothing to do with the problem. The host system’s NAT must do NAT for the guest, indeed. Does the Windows NAT software in question have such a capability? Dunno, and it’s not a Linux question anyway. Recommended solution: replace NAT with bridging for the virtual machine.
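If bridging is not an option, the usual workaround on the vsftpd side is the second case above: pin the passive data ports to a small range that you forward alongside port 21, and advertise the externally visible address. A sketch of commonly documented vsftpd options with hypothetical values, not the poster's verified setup:

```
# vsftpd.conf fragment for passive mode behind NAT / port forwarding
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40009
# the address clients should connect back to (the NAT's outside address)
pasv_address=203.0.113.5
```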
FTP Troubleshooting
1,501,174,611,000
I've managed to run vsftpd with unencrypted FTP, Implicit SSL, and Explicit SSL. What I'm looking for is a way to run it with Explicit SSL, but have a separate port for SSL. For example: port 15000 for unencrypted, and port 15001 for SSL. This is because I want to enable LAN users to connect unencrypted but WAN users to connect only encrypted. I would use SSL on LAN too, but I'll be going Gigabit soon (laggard) and having a couple of users transfer files at speeds 70-100MB/sec is going to bring my server's CPU to its knees. What has worked so far is to run two instances of vsftpd with different configs. I was hoping for a more tidy solution.
The answers to this question at Server Fault suggest the only way to do this is to run two separate instances of vsftpd, each with one of the configurations you want.
vsftpd: use Explicit SSL in a different port than unencrypted FTP
1,501,174,611,000
Right now we have a script that uses the FTP mget command to download all the files in a specific location. After after verifying the files have successfully downloaded we run a ftp delete command to delete each file that was downloaded. We noticed that each ftp delete is creating a new connection and I was wondering if it is possible to delete each file in one connection? I have a .txt file with all the file names that need to be deleted, but the file type varies a great deal so it would be nice to be able to target each file individually.
You might use lftp instead of the regular ftp client. With lftp you can use mget -E /path/to/files which will delete the source files after succesful transfer. See http://lftp.yar.ru/lftp-man.html for the manual.
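If switching the download itself to lftp is not possible, the delete pass alone can still be batched into one session. A hedged sketch that turns the .txt list of file names into a single lftp command file — host and credentials are placeholders, and the batch is printed here instead of executed:

```shell
# build one lftp session that removes every file named in files.txt
printf '%s\n' dir1/a.csv dir2/b.log > files.txt   # stand-in for the real list
{
  echo 'open -u user,pass ftp.example.com'
  while IFS= read -r f; do
    printf 'rm "%s"\n' "$f"      # one delete per listed file, one connection
  done < files.txt
  echo 'bye'
} > batch.lftp
cat batch.lftp
# run it with: lftp -f batch.lftp
```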
delete multiple remote files with the FTP command
1,501,174,611,000
I'm using GNU's Inetutils FTP and have gotten it all set up except it does not automatically start when I turn on my system. In order to get it to work I have to start the standalone using "ftpd -D". I've figured out that I have xinetd running and I believe I need to use that to automatically start the daemon. In the /etc/xinetd.d/ directory I've added a file named 'ftp'. Inside its contents are: service ftpd { socket_type = stream protocol = tcp wait = no user = root server = /usr/bin/ftpd instances = 20 } *Doing a whereis shows that ftpd resides in /usr/bin/ftpd* After adding it I reloaded the configuration and restarted the server. /etc/init.d/xinetd reload /etc/init.d/xinetd restart My xinetd.conf file is as followed: # Simple configuration file for xinetd # # Some defaults, and include /etc/xinetd.d/ defaults { instances = 60 log_type = SYSLOG authpriv log_on_success = HOST PID log_on_failure = HOST cps = 25 30 } includedir /etc/xinetd.d There was also an inetd.config file in my system so I added the following for good measure though it doesn't seem inetd is running. ftp stream tcp nowait root /usr/bin/ftpd in.ftpd -l -a When I try to connected to my ftp server I get the error "ECONNREFUSED - Connection refused by server". Does anyone have any idea why this isn't getting started automatically by xinetd? I got my information from this website: http://www.cyberciti.biz/faq/linux-how-do-i-configure-xinetd-service/
I figured it out. In case anyone else has this problem in the future: I forgot to specify a port to use for the service, changed the service name to ftp, and set disable to no. Here is my final service file: service ftp { port = 21 disable = no socket_type = stream protocol = tcp wait = no user = root server = /usr/bin/ftpd instances = 20 } To get the logging working I used the following command: /usr/sbin/xinetd -filelog /var/log/xinetd -f /etc/xinetd.conf
Starting FTP with xinetd
1,501,174,611,000
I had vsftpd set up such that I was able to upload files to a VPS I set up. The only problem is that I could not create directories. I set up vsftpd to disallow anonymous users, but allow virtual users to connect with their local credentials. At this point, the error message when I tried to create a folder changed from '550: Create directory failed' (I'm paraphrasing) to '550: Permission denied'. root owns the /var/www folder, and user with which I was authenticating had read and execute permissions but not write, so it makes sense that I wouldn't be able to create folders or files. At this point I tried using chown and chmod to recursively change the group ownership to a group that my user was in and give my user write permission. This seemed to work at first - in the SSH session, I was able to cd to /var/www and create a new directory. However, when I tried to log in with my ftp client, I was now denied access. What's even weirder is when I checked /var/log/vsftpd.log, I see the following lines: Mon Jan 5 00:03:25 2015 [pid 801] CONNECT: Client "73.53.82.111" Mon Jan 5 00:03:25 2015 [pid 800] [gradinafrica] OK LOGIN: Client "73.53.82.111" ...even though the login doesn't seem to work. What's going on? EDIT (more info): OS: Ubuntu 14.04 Architecture: Virtual private server (?) When I set up the server, I disallowed logging in as root (as recommended by multiple sources) and set up a different user - 'gradinafrica' - which I added to the sudo group. I'm attempting to use this account for ftp. I haven't worked with sftp at all. 
Here's the contents of vsftpd.conf (comments omitted):

listen=YES
anonymous_enable=NO
local_enable=YES
virtual_use_local_privs=YES
anon_upload_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
local_root=/var/www/
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
If you need write permission for your user account in /var/www/, that user needs to be a member of the apache or www-data group (the group name depends on your operating system), and write_enable=YES must be set in vsftpd.conf. Only use allow_writeable_chroot=YES if you have added a user whose home directory is the web root directory.
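A sketch of the setup described above, assuming the Debian/Ubuntu group name www-data, the user gradinafrica from the question, and /var/www as the web root (adjust all three for your system):

```
# add the FTP user to the web server's group
usermod -aG www-data gradinafrica

# hand the web root to that group and allow group writes
chown -R root:www-data /var/www
chmod -R g+w /var/www
```

Group membership changes only take effect on the next login, so reconnect the FTP session after running usermod.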
Controlling ftp access with vsftpd
1,501,174,611,000
I would like to make shortcuts for gvfs mounts (I prefer to mount from the shell). For this, it is quite necessary/convenient not to have to enter the password for the ftp:// and sftp:// connections . Currently the only way I know is: echo 12345 | gvfs-mount sftp://[email protected]/ Even though I can make the file's rights 700 (or even 100), I'm not sure if it's reasonably safe. Is it worse than .netrc items? I know about ssh-copy-id, but I can't access shell on most of my sftp servers. Is there a better (standard) alternative to my approach? What do nautilus/caja do when they store the passwords?
You could use something like expect to provide the credentials each time you want to connect. It's not super secure but gives you what you want.

#!/usr/local/bin/expect --
set timeout -1
spawn gvfs-mount {args}
expect "User"
send "joe\n"
expect "Password:"
send "xxxxx\n"
expect eof

Source: gvfs-mount specify username password
gvfs-mount auto password
1,501,174,611,000
Every 5 minutes some new files are downloaded via lftp to a local directory. I need to upload to another ftp only the non-existing files. My script so far is:

#! /bin/bash
today=$(date +%Y%m%d)
today_files="rec."$today"_"
programa_dir="/home/user/local-dir"

# Download files that do not exist in the local directory
lftp <<EOF
open -u user,pass ftp1
mget "$today_files*" -O $programa_dir
bye
EOF

# Upload the files
lftp <<EOF
open -u user,pass ftp2
lcd $programa_dir
mirror -R
bye
EOF

The mirror -R command doesn't recognize that only a few files do not exist in the remote directory of the second ftp. Is there a way to fix that? I need only to check the file names, not the creation or modification times of the files. For the second ftp I tried

lftp <<EOF
open -u user,pass ftp2
mput $programa_dir/* -O /
bye
EOF

The result was the same - lftp uploaded all the files, not only the non-existing ones.
I don't have access to LFTP at the moment, but I suspect you're looking for the --only-missing option, which is only usable with mirror. Try this:

lftp <<EOF
open -u user,pass ftp2
mirror --reverse --only-missing $programa_dir /
bye
EOF
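If lftp's mirror can't be used for some reason, the same "upload only non-existing files" rule can be implemented by hand: list both sides, compare by file name, and upload the difference. A hypothetical Python sketch of the comparison step (the file names are made up; the actual listing and uploading would still go through an FTP client):

```python
def files_to_upload(local_names, remote_names):
    """Return the local file names that do not exist on the remote side.

    Comparison is by file name only, ignoring timestamps, which matches
    the requirement in the question.
    """
    return sorted(set(local_names) - set(remote_names))

local = ["rec.20170727_a.wav", "rec.20170727_b.wav", "rec.20170727_c.wav"]
remote = ["rec.20170727_a.wav"]

print(files_to_upload(local, remote))
```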
LFTP mirror upload only non existing files to remote directory
1,501,174,611,000
I installed vsftpd on Linux Mint and changed some settings in the configuration file so that it permits anonymous users. However, I want to redirect them to a certain directory where they can upload files. I'm just confused about the settings...
Please make sure the following setting is configured in your /etc/vsftpd.conf file:

anon_root=/example/directory/
# Directory to be used for an anonymous login

If you are trying to create a soft link to redirect to a certain directory, note that soft links cannot resolve unless their target is inside the chroot area.
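For the upload part of the question, a minimal /etc/vsftpd.conf sketch (the directory path is an example; the upload directory must be writable by the ftp user, while the chroot root itself must not be writable):

```
anonymous_enable=YES
# anonymous users land here
anon_root=/srv/ftp
write_enable=YES
# allow anonymous uploads (STOR)
anon_upload_enable=YES
# optionally allow anonymous mkdir as well
anon_mkdir_write_enable=YES
anon_umask=022
```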
Right settings of vsftpd?
1,501,174,611,000
Anonymous FTP users are allowed to access the FTP server. Suppose there is a user named JOE - how can I allow him to access the FTP server?
Not sure why you chose vsftpd, as the documentation is notoriously lacking/distributed. However, to answer your question - the simplest method of allowing registered user access is through enabling local users: # Uncomment this to allow local users to log in. local_enable=YES There are other options, dependent on your needs, such as storing users in a database engine like MySQL. For more pertinent information, please check the following pages: vsftpd configuration - online man page viki (vsftpd community wiki) - local user configuration viki (vsftpd community wiki) - virtual user configuration (db) Assuming you meant you compiled vsftpd when you said "I made my own", then this information should apply. Let us know if this is an incorrect assumption or if the info provided doesn't assist.
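If the goal is to allow only a specific user such as JOE rather than every local account, vsftpd's user-list options can turn the list into an allow-list. A sketch (the file path is a common default; check your build's documentation):

```
# /etc/vsftpd.conf
local_enable=YES
userlist_enable=YES
# with userlist_deny=NO, only the users named in the file may log in
userlist_deny=NO
userlist_file=/etc/vsftpd.user_list
```

Then put JOE on a line of its own in /etc/vsftpd.user_list.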
How to allow a particular user to access the FTP?
1,501,174,611,000
I have installed ftp and sftp on my Linux Mint system for WordPress file transfer. I had to create new user accounts for ftp and sftp on my system during the installation process. Is it possible to run ftp and sftp from my home user account, and if yes, how do I configure it? And if single-account access is possible, what is advisable: multiple/separate user accounts for ftp/sftp, OR a single account merged with the home user? I tried searching for solutions on various forums, but did not get any clarity. Adding for more clarity: For WordPress, I referred to this link wordpress installation To upload WordPress files to GitHub, I needed to convert it to a static site, so for using the 'Simply Static' WordPress plugin, I required ftp to upload files from my local WordPress installation. Below are the links I used for ftp and sftp installation - I used these 2 links for ftp installation ftp install link 1 & its user creation snap ftp install link 2 & its user creation snap and for sftp sftp install link & its user creation snap In both ftp and sftp, I had to create new users for using them in FileZilla/an ftp client. So I want to clarify whether I can use my default system user for ftp/sftp access and how to configure it. Also, is it advisable to create new separate users, and what is the reason for it? I searched how and why I can't use the system user for all these server processes, but did not get any clarity from the internet.
The SFTP article you quoted is misleading at best, and inaccurate for the remainder. SFTP has nothing to do with vsftpd. You can use sftp with your own user account already: sftp you@yourhost If you have SFTP you almost certainly do not need to install vsftpd. At least, not unless you have a legacy file transfer application you need to support. In direct response, Is it possible to run ftp and sftp from home user account, if yes, how to configure it Yes. Undo everything in the SFTP document, including the creation of a user account. It was already installed and configured for that.
Can we run ftp or sftp from a single user account?
1,501,174,611,000
Apologies if this isn't the right forum, but I need the advice of an FTP-familiar engineer. For security reasons, FTP services are being shut down on our company's Solaris 5 server; the reason is that man-in-the-middle attacks are easier under the old FTP protocol. We run perl scripts that FTP OUT to other services and pull information in, which we then sanitize and parse. What we don't want to do is function as an FTP host, because we have no information that is relevant to put out. The software is old, there is no dev/test environment, and wholesale "shutting down FTP services" (where we can't even FTP out) would break our production environment. I'd prefer to keep running FTP out to other servers in the interim until we can convert all the scripts to SFTP, but is this even possible? To be a client but not a host? I know this question is vague by this SE forum's norms, so feel free to close.
FTP service inbound is managed by an FTP server. FTP connections outbound are performed by a client. The two parts are independent of each other. I'm not running Solaris so I can't give you the specific process names, but this Oracle documentation link explains how to enable and disable the FTP server:

svcadm disable network/ftp # Stop and disable the FTP server

You can verify that the service is no longer running by attempting to connect to it with the FTP client (and you can also confirm that you can still make connections outbound):

ftp -n localhost
Solaris functioning as FTP client but not as a host. Possible?
1,501,174,611,000
How can I list all users and groups and change their permissions? I am trying to update WordPress, but my FTP user intranet_admin does not have enough permissions and is unable to create a directory.

Update WordPress
Download the update from https://downloads.wordpress.org/release/en_US/wordpress-4.9.7.zip ...
Unzip the updated version ...
Directory could not be created. The installation was not successful

Edit: I found out that I can list all users with "cat /etc/passwd". I found this entry:

intranet_admin:x:1002:1000::/srv/www/htdocs/wp-intranet:/bin/bash

What do I have to change so that the user will have enough permissions for updating WordPress?

OT: WordPress problem solved! I executed from within the WordPress root directory:

find -type d -exec chmod 755 {} \;
find -type f -exec chmod 644 {} \;

and then:

chown -R wwwrun:ftp-users /path/to/my/wp-directory

This solved the problem and I was able to upgrade!
The permission to create a particular directory is not tied to the user account as tightly as you seem to think it is. That is, it's not part of the account's entry in /etc/passwd. A user can create (or delete) a directory in another directory if that other directory is writable by the user or by a group that the user belongs to. This would also permit the user to create/delete files in that directory. Example: $ ls -ld . drwxr-xr-x 2 myself staff 512 May 4 16:29 . Here, only the user myself can create and delete files and directories while everyone is able to access the directory and list the directory content (the x and r permissions). $ ls -ld . drwxrwxr-x 2 myself staff 512 May 4 16:29 . In this case, the myself user as well as all users in the group staff can create and delete files and directories in the current directory.
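The mode strings shown by ls -ld above can also be decoded programmatically. A small Python illustration of the same user/group write bits (pure bit arithmetic on example mode values, nothing system-specific):

```python
import stat

# 0o040000 marks a directory; the low 9 bits are rwx for user/group/other
mode_user_only = 0o040755  # drwxr-xr-x: only the owner may create/delete entries
mode_group_too = 0o040775  # drwxrwxr-x: owner and group members may

print(stat.filemode(mode_user_only))  # drwxr-xr-x
print(stat.filemode(mode_group_too))  # drwxrwxr-x

# "can this class write?" is just a bit test against the mode
assert mode_user_only & stat.S_IWUSR      # owner has write
assert not mode_user_only & stat.S_IWGRP  # group does not
assert mode_group_too & stat.S_IWGRP      # here the group does
```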
Change permissions of users and groups? [closed]
1,501,174,611,000
I'm using proftpd on my server (Ubuntu 16.04 x86_64). By default proftpd uses the standard port 21. I can connect from my home notebook to the ftp server in active mode without problems. Now I stop proftpd, change the port from 21 to 10021, and start the service again. Now I cannot connect in active mode, only in passive mode. What changed? Also, I cannot understand why active mode worked in the first place. I access the internet through a router, and I do not forward any ports to my notebook at the router. As I understand it, on connect my notebook (the ftp client) creates a connection from some port > 1023 to server port 21. My notebook also sends a second (data) port to the server, and the server connects from its own port 20 back to me on this data port. But how can that second connection be established if my ports are closed from the WAN?
Your firewall (router) has a connection tracking helper for FTP. When it sees an FTP control connection (which it recognizes by TCP destination port == 21), it watches the commands. When it sees your client send the PORT command, it rewrites it (to your external IP address, and maybe a different port) and keeps track of the expected connection from the FTP server. When that connection arrives, it's allowed through. When you changed the port, none of that happened, because 10021 isn't recognized as an FTP control connection. On Linux, at least, that feature is the nf_conntrack_ftp module, and you can set the ports option to include 10021 if desired. PS: A similar thing can be done with a firewall in front of the server, though in reverse: it's done on passive-mode transfers, instead of active-mode.
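For reference, the PORT command the helper rewrites carries six comma-separated decimal bytes: four for the client's IP address and two for the data port (port = p1*256 + p2). A small Python sketch decoding it (the address values are made up):

```python
def parse_port_command(arg):
    """Decode the argument of an FTP PORT command into (ip, port).

    Example argument: "192,168,1,2,4,1" -> ("192.168.1.2", 1025)
    """
    parts = [int(p) for p in arg.split(",")]
    if len(parts) != 6 or not all(0 <= p <= 255 for p in parts):
        raise ValueError("malformed PORT argument: %r" % arg)
    ip = ".".join(str(p) for p in parts[:4])
    port = parts[4] * 256 + parts[5]  # high byte, low byte
    return ip, port

print(parse_port_command("192,168,1,2,4,1"))  # ('192.168.1.2', 1025)
```

This is exactly the text the nf_conntrack_ftp helper has to find and rewrite, which is why it only works when it knows which TCP port carries the control connection.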
Active mode FTP stopped working after changing the server port
1,501,174,611,000
I am looking to install ftp (client) on my RHEL server. I can't access the internet directly, so I need to download the package. I have downloaded ftp-0.17-17.2.x86_64 from Red Hat, and it is telling me:

libreadline.so.4()(64bit) is needed by ftp-0.17-17.2.x86_64

When I try to install readline-devel 4, it tells me a newer version is already installed; and readline 4 tells me it can't install because of conflicts with readline-devel.

find / -name libreadline* -print
/lib64/libreadline.so.6.0
/lib64/libreadline.so.6
/usr/lib64/libreadline.so

Can anyone help me figure out what the next step is?
Try to download a more recent version of ftp, for instance ftp-0.17-54.el6.x86_64.rpm. See http://mirror.centos.org/centos/6/os/x86_64/Packages/ or http://mirror.centos.org/centos/6/os/x86_64/Packages/ftp-0.17-54.el6.x86_64.rpm directly. From the ldd output you can see that it is linked against libreadline.so.6:

ldd /usr/bin/ftp
linux-vdso.so.1 => (0x00007fffa67be000)
libreadline.so.6 => /lib64/libreadline.so.6 (0x00007fe48362c000)
libncurses.so.5 => /lib64/libncurses.so.5 (0x00007fe48340a000)
libc.so.6 => /lib64/libc.so.6 (0x00007fe483075000)
libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00007fe482e54000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fe482c50000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe48388b000)
Installing ftp without internet access on RHEL 6
1,501,174,611,000
I'm trying to compile php 5.2.x in debian gnu/linux: ./configure --with-ldap --enable-ftp --with-apxs2 --with-mcrypt --enable-bcmath --with-bz2 --enable-calendar --enable-dba=shared --enable-exif --with-gettext --enable-mbstring --with-mhash --with-readline --enable-shmop --enable-soap --enable-sockets --enable-sysvmsg --enable-wddx --enable-zip --with-zlib --with-xsl make works perfect, but i need curl: ./configure --with-ldap --enable-ftp --with-apxs2 --with-mcrypt --enable-bcmath --with-bz2 --enable-calendar --enable-dba=shared --enable-exif --with-gettext --enable-mbstring --with-mhash --with-readline --enable-shmop --enable-soap --enable-sockets --enable-sysvmsg --enable-wddx --enable-zip --with-zlib --with-xsl --with-curl make error: /usr/bin/ld: ext/curl/.libs/interface.o: undefined reference to symbol 'CRYPTO_set_id_callback@@OPENSSL_1.0.0' /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0: error adding symbols: DSO missing from command line collect2: error: ld returned 1 exit status Makefile:241: recipe for target 'sapi/cli/php' failed dpkg -l | grep openssl ii libcurl4-openssl-dev:amd64 7.38.0-4+deb8u5 amd64 development files and documentation for libcurl (OpenSSL flavour) ii libgnutls-openssl27:amd64 3.3.8-6+deb8u3 amd64 GNU TLS library - OpenSSL wrapper ii openssl 1.0.1t-1+deb8u5 amd64 Secure Sockets Layer toolkit - cryptographic utility dpkg -l | grep curl ii curl 7.38.0-4+deb8u5 amd64 command line tool for transferring data with URL syntax ii libcurl3:amd64 7.38.0-4+deb8u5 amd64 easy-to-use client-side URL transfer library (OpenSSL flavour) ii libcurl3-gnutls:amd64 7.38.0-4+deb8u5 amd64 easy-to-use client-side URL transfer library (GnuTLS flavour) ii libcurl4-openssl-dev:amd64 7.38.0-4+deb8u5 amd64 development files and documentation for libcurl (OpenSSL flavour) UPDATE: the error is about FTP with openssl support: ./configure --with-openssl --enable-ftp make ext/openssl/openssl.o: In function `zm_startup_openssl': 
/usr/src/php-5.2.17/ext/openssl/openssl.c:681: undefined reference to `SSL_library_init' ... collect2: error: ld returned 1 exit status Makefile:228: recipe for target 'sapi/cli/php' failed make: *** [sapi/cli/php] Error 1
The problem was OpenSSL. I installed OpenSSL 0.9.8 from source: move to /usr/src, then compile and install it without the man pages (installing them produced an error):

./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl no-asm -fPIC
make
make install_sw

Then I compiled PHP with these options:

./configure --with-openssl=/usr/local/openssl --with-openssl-dir=/usr/local/openssl --with-curl --enable-ftp --with-ldap --with-apxs2 --enable-bcmath --with-bz2 --enable-calendar --enable-exif --enable-mbstring --with-mhash --enable-shmop --enable-soap --enable-sockets --enable-sysvmsg --enable-zip --with-zlib
make

UPDATE: works for 5.6.28 too
php: compiling with openssl, ftp, ldap, curl support in debian gnu/linux