1,525,288,972,000
Context: I am using POSIX shared memory to provide a set of processes with a shared memory space. I have used this scheme for some time now in order to share data, and it's working okay. However, I recently ran into an odd problem with a certain class of programs. Problem: I wrote a program in which each process has t...
As far as I can see, four processes get spawned in quick succession, and each of them tries to do *sum += some_value; it is perfectly possible that they all see *sum as being zero before the addition. Let's use an abstract assembler syntax. The C statement *sum = *sum + local_sum is compiled into LOAD *sum into R0 ...
Processor not seeing changes to POSIX shared memory?
1,525,288,972,000
Using Manjaro / Arch Linux, I wanted to install another browser. However, no matter whether I installed Opera or Chromium (via pacman), I always get an error when executing it (from both the Application Launcher and the shell). Running Chromium from the shell I get: $ chromium ...
Looks like a reboot solved the issue. However I have no idea what happened - and why it only seems to happen to browsers.
Browser (Opera, Chromium...) start causing Permission denied (13) error for shared memory
1,525,288,972,000
I have written a "device driver" (see source code here: https://bitbucket.org/wothke/websid/src/master/raspi/websid_module/ ) that runs fine for most of the time (see https://www.youtube.com/watch?v=bE6nSTT_038 ) but which still seems to have the potential to randomly crash the device occasionally. The "device driver...
After having failed to improve the situation by adding "memory barriers" at every reasonable point in my code I finally found a workaround that works. The problem does not seem to be linked to the shared memory at all. Instead it seems to be triggered by the scheduler and adding calls to "schedule()" in my long runnin...
Am I making invalid assumptions with regard to my kernel module's shared memory?
1,525,288,972,000
Let's say we are creating a shared memory region using mmap(). Let's say the total memory size is 4096. If we use the fork() system call to create children, will the children use the same memory, or will they need their own memory to work?
On fork() the memory space of the parent process is cloned into the child process. As an optimization, modern operating systems use COW (copy on write), so all private memory is shared with the child process until one of the processes performs a change. Then the affected memory pages get duplicated. The child process...
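The private/COW side of this is easy to see from the shell, where every subshell is a forked child (a sketch; memory mapped with mmap(..., MAP_SHARED, ...) is the exception and remains shared after fork):

```shell
# every ( ... ) subshell is a fork: the child starts with a COW copy
# of the parent's private memory, so its writes stay private
x=parent
( x=child )      # the child modifies its own copy of the page
echo "$x"        # prints: parent
```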
How does a process and its children use memory in case of mmap()?
1,525,288,972,000
Is it a true statement that, shared memory does not work between a host OS and guest OS, but a Unix Domain Socket (specifically udp) can communicate between the two? An in depth explanation would be appreciated, thanks!
In general Unix Domain Sockets cannot communicate between host OS and guest OS. Unix Domain Sockets are, like e.g. Named Pipes, bound to the OS kernel. If you open the same Unix Domain Socket file node in the host and the guest, you get two different virtual network connections. One in the host kernel and one in the g...
Unix Domain Socket with VM
1,525,288,972,000
I'm curious because today the only way I know how to give two different processes the same shared-memory is through a memory-mapped file, in other words, both processes open the same memory-mapped file and write/read to/from it. That has penalties / drawbacks as the operating system needs to swap between disk and memo...
Apologies in advance if that is a silly question, but is there such a thing as a pure shared memory between processes, not backed by a file. Not a silly question! There is, it's the default way of getting it; (SYSV) shmget is the function you use to get these shared memory buffers. You assign a string name to it, a ...
Is it possible for two processes to use the same shared-memory without resorting to a file to obtain it, be it a memory-mapped file or /dev/shm file?
1,525,288,972,000
I have a computing cluster of 44 cores and 256GB memory running Ubuntu and I'd like to limit the number of CPUs and memory used by certain users. Limiting memory usage would be more important. So for example, I'd like to say that user X should only be able to use 10 CPUs and 50GB memory. How can I achieve this?
From a quick Google search, you seem to need ulimit. See more on that through man limits.conf. The best way to limit resources is through VMs (Xen/KVM/OpenVZ), but I don't think that's what you asked for.
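As a sketch of the ulimit/limits.conf route (the user name X is a placeholder; note that an "as" entry caps the address space of each process, in KB, not the user's total, and limits.conf has no knob for the number of CPUs):

```
# /etc/security/limits.conf -- hypothetical entry for user X
# hard cap on per-process virtual address space, in KB (~50 GB)
X    hard    as    52428800
```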
Limit memory usage of a user
1,525,288,972,000
I just took a look at the output of top and it showed me (amongst other processes) the following: As one can see, I have ten processes consuming approx. 10GB each, so 100GB in total. The computer, however, has only 64GB of memory, as can be seen in the second line from the top, of which currently about 22GB are used. Now...
You can see that the SHR column is displaying the same amount of memory as RES - this means that practically 100% of that particular task's resident memory consists of shared memory segments. Even that is not giving you full insight though, as RES is just the amount of memory that is not paged out. To figure out what ...
determine the actual memory usage of several processes that share a large memory segment
1,525,288,972,000
I have a program that creates four shared memory objects. The memory creation routine calls shm_unlink() before attempting to create them, and the program calls another routine to delete them with shm_unlink() at the end of the run. Today I got "permission denied" on objects 2-4 (but not object 1) when attempting to ...
Root can delete shared memory (or other IPC items) owned by any user. If you need a pragmatic way to do this, do this as root. Otherwise, you will need to possibly alter the permissions on the created items, either as they are created or afterwards. All filesystem entries (including things that aren't files) use the...
Delete POSIX shared memory owned by different user
1,525,288,972,000
When I run ipcs -m, I can see a list of the shared memory segments on the system, like ------ Shared Memory Segments -------- key shmid owner perms bytes nattch status 0x00000000 0 user1 664 342110 0 0x00000000 32769 user1 664 28391740 5 0x0000...
There is no tool to do this. Only ipcrm (for deleting existing shared memory objects), ipcmk (for creating shared memory objects) and ipcs (for showing existing shared memory objects) are present (I mean in the util-linux project). The kernel doesn't provide a /proc interface for Sys V shared memory objects, unlike for POSIX shar...
Change ownership of shared memory
1,525,288,972,000
We are debugging a situation where the cached/shared memory increases and increases until the system reaches the OOM-killer. We have set shmax and shmall in sysctl.conf but without any visible effect. Do we need to enable something more for shmax/shmall to work? Or can some part of the system go beyond this limit; how hard is...
SHMAX and SHMALL won't constrain the size of your miscellaneous tmpfs mounts. Since tmpfs lives completely in the page cache and on swap, all tmpfs pages will be shown as “Shmem” in /proc/meminfo and “Shared” in free(1). Check with the df utility how many of these filesystems are actually mounted on your system and eventua...
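On a Linux box, the two checks the answer suggests look like this (a sketch; the output varies per system):

```shell
# tmpfs and shared memory pages are accounted under Shmem
grep Shmem /proc/meminfo

# list all mounted tmpfs filesystems and their usage
df -h -t tmpfs 2>/dev/null || echo 'no tmpfs mounted'
```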
What shared memory is not controlled by SHMAX/SHMALL?
1,526,149,786,000
Our nodes are named node001 ... node0xx in our cluster. I wonder, is it possible to submit a job to a specific node using Slurm's sbatch command? If so, can someone post an example code for that?
I figured it out. You need to use -w node0xx or --nodelist=node0xx.
How to submit a job to a specific node using Slurm's sbatch command?
1,526,149,786,000
I submitted lots of SLURM job script with debug time limit (I forgot to change the time for actual run). Now they are all submitted at the same time, so they all start with job ID 197xxxxx. Now, I can do squeue -u $USER | grep 197 | awk '{print $1}' to print the job ID's I want to delete. But how do I use scancel...
squeue -u $USER | grep ^197 | awk '{print $1}' | xargs -n 1 scancel Check the documentation for xargs for details. If scancel accepts multiple job ids (it should), you may omit the -n 1 part.
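The pipeline can be rehearsed on canned squeue-like output, with echo standing in for scancel (the job IDs below are made up):

```shell
# fake `squeue -u $USER` output: JOBID is the first column
printf '%s\n' '19700001 debug job1' '19700002 debug job2' '555 debug other' |
  grep ^197 | awk '{print $1}' | xargs -n 1 echo scancel
# prints:
# scancel 19700001
# scancel 19700002
```

Drop the echo once the list looks right.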
Best way to cancel all the SLURM jobs from shell command output
1,526,149,786,000
When running a SLURM job using sbatch, slurm produces a standard output file which looks like slurm-102432.out (slurm-jobid.out). I would like to customise this to (yyyymmddhhmmss-jobid-jobname.txt). How do I go about doing this? Or more generally, how do I include computed variables in the sbatch argument -o? I have ...
Here is my takeaway from the previous answers: %j gives the job id, %x gives the job name. I don't know how to get the date in the desired format. The job ID serves as a unique identifier across runs, and the file's modified date captures the date for later analysis. My SBATCH magic looks like this: #SBATCH --output=R-%x.%j.out #SBATCH --...
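Slurm has no filename pattern for a timestamp, but the shell can expand one at submission time; a sketch, with job.sh as a placeholder script name and echo as a dry run:

```shell
# the shell builds the timestamp when you submit; Slurm later
# substitutes %j (job id) and %x (job name) itself
stamp=$(date +%Y%m%d%H%M%S)
echo sbatch --output="${stamp}-%j-%x.txt" job.sh
```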
SLURM: Custom standard output name
1,526,149,786,000
How can I programmatically access SLURM environmental variables, like MaxArraySize or MaxJobCount? I would like to partition my job arrays into chunks of the allowed maximum size. Can this information be queried with any of SLURM's commands? So far, I have failed to find relevant information on this over the net. Find...
$ scontrol show config | grep -E 'MaxArraySize|MaxJobCount' MaxArraySize = 1001 MaxJobCount = 1000000 Will that be enough for what you're wanting to do? To get only the value for e.g. MaxArraySize: $ scontrol show config | sed -n '/^MaxArraySize/s/.*= *//p' As a shell function: slurm_conf_valu...
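The sed extraction above can be dry-run against canned scontrol output, so the parsing itself is visible without a cluster:

```shell
# stand-in for `scontrol show config` output
conf='MaxArraySize            = 1001
MaxJobCount             = 1000000'

printf '%s\n' "$conf" | sed -n '/^MaxArraySize/s/.*= *//p'
# prints: 1001
```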
How to check SLURM environmental variables programmatically?
1,526,149,786,000
I want to keep monitoring a specific job on a Slurm workload-manager cluster. I tried to use the watch command and grep for the specific id. If the job id is 4138, I tried $> watch squeue -u mnyber004 | grep 4138 $> squeue -u mnyber004 | watch grep 4138 but they don't work. The second command works for the first few secon...
You have to quote the command watch 'squeue -u mnyber004 | grep 4138'
`watch` command with piping `|` [duplicate]
1,526,149,786,000
Slurm is a workload manager. There are two modes of running a job: interactive (srun) and batch mode (sbatch). When using interactive mode, one needs to leave the terminal open, which may place an extra burden on the remote terminal (laptop). However, batch mode just submits the bash script (*.sh) and one can close the remote t...
echo yes | your-program yes yes | your-program
Automatically input "yes" on the bash file [closed]
1,526,149,786,000
I am a beginner in the use of .sh scripts so please excuse my ignorance. This is my problem: To submit my jobs to our cluster the corresponding submit file has to contain a "slurm header" and looks something like this. #!/bin/sh # ########## Begin Slurm header ########## # #SBATCH --job-name=blabla # ########### End ...
I don't think you can. All lines starting with # are ignored by the shell, and the $1 and $2 are shell things. Many job managers, including slurm, have some commands that are written as shell comments, so ignored by the shell, but are read by the job manager. This is what your SBATCH line is: #SBATCH --job-name=blabla...
Passing Argument to Comment in .sh script
1,526,149,786,000
I am working on a cluster machine that uses the Slurm job manager. I just started a multithreaded code and I would like to check the core and thread usage for a given node ID. For example, scoreusage -N 92512, where "scoreusage" is the command that I am unsure of.
It's been a few years since I ran a slurm cluster, but squeue should give you what you want. Try: squeue --nodelist 92512 -o "%A %j %C %J" (that should give your jobid, jobname, cpus, and threads for your jobs on node 92512) BTW, unless you specifically only want details from one particular node, you might be better...
Check CPU/thread usage for a node in the Slurm job manager
1,526,149,786,000
My question is similar to the watch question here, but with a twist. I need to use quotes, which seem to be stripped by an aliased watch. I want to run watch on a custom slurm squeue command: $alias squeue_personal='squeue -o "%.18i %.9P %.8j %.8u %.216t %.10M %.6D %R %V %S %Z"' $alias watch='watch ' NOTE:...
watch concatenates its command line arguments, joining them with spaces and passes the result as a string to sh -c. So watch ls -l "foo bar" becomes the same as watch ls -l foo bar, and you get a similar problem with squeue. You have two choices: Add explicit quotes for the shell that watch starts. As you actually di...
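The joining behaviour can be imitated with a tiny function: like watch, it pastes its arguments together with spaces and hands the string to sh -c, so inner quoting is lost unless you add a second layer:

```shell
# mimic how watch builds the command string it runs
run_like_watch() { sh -c "$*"; }

run_like_watch echo 'foo   bar'     # inner spacing lost: prints "foo bar"
run_like_watch "echo 'foo   bar'"   # re-quoted: prints "foo   bar"
```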
Watch-command-alias-expansion AND need to use quotes
1,526,149,786,000
In my opinion, comments are comments are comments. They should NEVER change program state. Sadly, the people over at SLURM disagree. SLURM requires the following syntax (or something similar) to exist at the front of a .sh file: #!/bin/bash #SBATCH --time=5:00:00 #SBATCH --ntasks=8 #SBATCH --mem-per-cpu=1024M My...
All the processing done by SLURM (by sbatch, specifically) is done before bash is invoked, so bash won't help you here. The script could be in any language, it wouldn't matter: the #SBATCH are only coincidentally bash comments, what matters is that they're sbatch directives. Options can be specified in the file so as ...
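One consequence of sbatch reading the directives itself: you can override them from the command line, or generate the header from shell variables before submitting. A sketch (job.sh and my_prog are placeholders):

```shell
# 1) command-line flags take precedence over #SBATCH lines in the script:
#      sbatch --time=10:00:00 job.sh
# 2) or build the header programmatically:
ntasks=8
job=$(mktemp)
{
  echo '#!/bin/bash'
  echo "#SBATCH --ntasks=$ntasks"
  echo 'srun ./my_prog'
} > "$job"
cat "$job"          # then: sbatch "$job"
rm "$job"
```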
Command Line Macros in Comments using SLURM
1,526,149,786,000
In short: On a Slurm cluster, I need some computers to be available and responsive to their respective owners during work hours. Problem: I manage a small (but growing) heterogeneous cluster with around 10 nodes, where some of the nodes are not dedicated. These are desktop computers used by colleagues on the same netw...
After a couple of days I managed to answer my own question. In hindsight it was quite simple. Responsiveness: The slurmd daemon can be started with command line arguments; list them with slurmd -h. In particular, slurmd -n 19 sets the highest nice-value (and thus lowest priority) for the daemon and all its subprocess...
Slurm on desktop computers, how to prioritize the owner
1,526,149,786,000
I am using a shared SLURM cluster. I am trying to get the path of the bash script from inside the script itself. There is already an excellent thread here: https://stackoverflow.com/questions/59895/get-the-source-directory-of-a-bash-script-from-within-the-script-itself. Unfortunately, none of those solutions work for ...
The problem was resolved in the comments. To summarize: It turns out that I did not properly diagnose the initial problem. SLURM did not modify $BASH_SOURCE or $0. I assumed it simply executed my script, but it actually copied my script to a new location (/cm/local/apps/slurm/var/spool/jobXXXXXX/slurm_script). To get ...
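Inside the job, the usual way to recover the original path is the Command= field of scontrol show job "$SLURM_JOB_ID"; here is a dry run of that parsing on canned output (the path is made up):

```shell
# stand-in for: scontrol show job "$SLURM_JOB_ID"
job_info='   Command=/home/user/project/run.sh
   WorkDir=/home/user/project'

script_path=$(printf '%s\n' "$job_info" | sed -n 's/^ *Command=//p')
echo "$script_path"    # /home/user/project/run.sh
```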
restore $0 or $BASH_SOURCE after it is modified by the cluster
1,526,149,786,000
Suppose my super computer has the following NODELIST's with the included features: NODELIST FEATURES NodeA (none) NodeB specialfeature and I am trying to benchmark performance using or not using the specialfeature feature. Measuring the performance of a run using specialfeature is easy. I simpl...
There is currently (as of version 15.8) no way to negate a feature in such a way. The only way is to define a complementary feature in the following way: NODELIST FEATURES NodeA nospecialfeature NodeB specialfeature and then submit a job with --constraint=specialfeature, and another one with --...
Using SLURM without a feature
1,526,149,786,000
The question is related to the article Introducing SLURM by Princeton Research Computing. For instance, in #SBATCH --job-name=slurm-test # create a short name for your job, the SBATCH after the first # will be processed, while the text after the second # will be treated as a comment. Is this because there is a space in the se...
# starts a comment in bash, so you should not run my_job.slurm with ./my_job.slurm or bash ./my_job.slurm, because everything after # will be ignored. But when you run it with sbatch, it will recognize comments beginning with SBATCH as parameters. https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmT...
"#" (comments) is SLURM job submissions
1,526,149,786,000
I am able to cancel slurm jobs by typing something like the following: $ scancel 66421802_[11-20] In bash, this works fine. However, in zsh I get the following error: $ scancel 66421802_[11-20] zsh: no matches found: 66421802_[11-20] How can I cancel a job range when using zsh?
[] are special to ZSH, though can be turned off by way of the noglob precommand modifier, so maybe the alias alias scancel='noglob scancel' will do the trick, and also for any other commands that take [] as inputs.
Cancel slurm job range in zsh
1,526,149,786,000
My question is related to a python error, but I suspect that it is more a Linux question than a python one. Thus I post it first here. I am running a python script which does a calculation and then produces a plot and saves it in a PDF file. The script runs through on my local machine (Mac OS), but when I run it on th...
This is an error with the python code using a library called “tk”. That’s a library usually used for showing a GUI so it expects to be able to access your display (xserver or similar). If you are running your code on a “headless” server then this just won’t work because there’s no monitor and your session can’t talk ...
Python error only when I run script on Linux cluster: _tkinter.TclError: no display name and no $DISPLAY environment variable
1,526,149,786,000
To build a piece of software I normally do rpmbuild -ta slurm*.tar.bz2 However, I now need to configure the software with the option --with-pmix=/home/user/git/pmix/install/2.1 Is this possible using rpmbuild, or do I need to go through the standard configure/make/make install procedure?
What you can do is create a SPEC file and make rpmbuild use it. In this file you can incorporate different parameters into the build process. You can check here for example usage of a SPEC file: In a shell prompt, go into the buildroot and create a new spec file for your package. Open the spec file in a text editor. Th...
Passing configure option to rpmbuild?
1,526,149,786,000
I have a large number of scripts that can be run as separate jobs on a computing cluster, which uses slurm. I want to select some of them, based on the contents, to submit. It's easy to identify the filenames of the jobs I want using grep, but I'm struggling to pipe those and submit them. I thought that I could do som...
grep -l 'pattern' script_folder/* | xargs -n 1 sbatch xargs will by default read as much as can fit on one command line before executing the given utility with all the things that it has read. With -n 1 you limit the number of items that it passes to the utility to a single item per invocation.
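A self-contained dry run of that pipeline, with throwaway scripts in a temp dir and echo in place of sbatch:

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\n# uses-gpu\n' > "$dir/a.sh"
printf '#!/bin/sh\n# cpu-only\n' > "$dir/b.sh"

# -l prints only the names of files whose contents match
grep -l 'uses-gpu' "$dir"/*.sh | xargs -n 1 echo sbatch
# prints: sbatch <tmpdir>/a.sh
rm -r "$dir"
```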
Submitting list of jobs to slurm
1,526,149,786,000
So, I am by no means a sysadmin but I need to use an existing SLURM installation to launch a sizable amount of jobs (around 5000). The cluster is composed of 1 node with 10 GPUs (with 8GB of memory each) and 56 CPUs. Every job is a batch script that I run with sbatch <file> and then I use sview to see what's going on ...
You should use "Sharding" GRES (gres:shard) instead of gres:GPU, available in 22.05 or newer. https://slurm.schedmd.com/gres.html#Sharding It allows different jobs to share a GPU -- just like oversubscribed Cores and RAM resources. The conventional gres:gpu exclusively allocates a GPU to jobs no matter how much memory...
Running multiple SLURM jobs on the same GPU
1,526,149,786,000
How can I determine the optimum/maximum number of CPUs per task when running a job? Is there a way to display the total available memory on a given CPU as well?
You can use sinfo to find maximum CPU/memory per node. To quote from here: $ sinfo -o "%15N %10c %10m %25f %10G" NODELIST CPUS MEMORY FEATURES GRES mback[01-02] 8 31860+ Opteron,875,InfiniBand (null) mback[03-04] 4 31482+ Opteron,852,In...
SLURM: How to determine maximum --cpus-per-task and --mem-per-cpu?
1,526,149,786,000
First things first: no knowledge of either slurm or Infiniband is required - this is a purely text processing problem. Second - I'm aware of ib2slurm - the code is somehow broken and quite possibly outdated - it core dumps each time it runs regardless of the existence or format of a map file. I can reduce the output o...
Q1: This awk command extracts a sorted list of unique computer names from the file, assuming the source file is much longer, having a block of lines for each switch. A script to get a whole switch block (assuming the switch line is always the first line of a continuous set of lines for each switch), sorted and removing...
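As a self-contained sketch of that idea, run on a hypothetical, heavily simplified ibnetdiscover-like input (the real format differs; here the host name is assumed to sit in the quoted comment of each port line):

```shell
# hypothetical, simplified input: one Switch header, then port lines
printf '%s\n' \
  'Switch 36 "S-1"   # "sw1"' \
  '[1] "H-1"[1] # "node002 HCA-1" lid 4' \
  '[2] "H-2"[1] # "node001 HCA-1" lid 5' \
  '[3] "H-2"[1] # "node001 HCA-1" lid 5' |
awk -F'"' '/^\[/ { split($4, a, " "); print a[1] }' | sort -u
# prints node001 and node002, one per line, deduplicated
```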
Text processing - Building a slurm topology.conf file from ibnetdiscover output
1,526,149,786,000
I am working on a SLURM cluster and there is a command to list all loaded software modules. I want to process the output, e.g. grep it for a certain word. However, if I try to use a pipe, I get unexpected output which I don't understand. $ module list Currently Loaded Modules: 1) miniconda3-4.8.2-gcc-8.3.1-altn3...
It seems the command sends its output to STDERR instead of STDOUT, and because the terminal displays both of them, things look the same either way. To send STDERR to STDOUT and be able to filter the command output, you can do it this way: module list 2>&1 | grep conda
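The effect is easy to reproduce with any command that writes to stderr; here a small stand-in for module list:

```shell
# a stand-in that, like `module list`, prints on stderr
fake_module_list() { echo 'Currently Loaded Modules: 1) miniconda3' >&2; }

fake_module_list 2>/dev/null | grep conda || echo 'no match: output went to stderr'
fake_module_list 2>&1 | grep conda        # now grep sees the line
```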
Can't pipe SLURM `module list` command
1,526,149,786,000
Hi there, I'm trying to download a large number of files at once; 279 to be precise. These are large BAM files (~90GB each). The cluster where I'm working has several nodes, and fortunately I can allocate multiple instances at once. Given this situation, I would like to know whether I can use wget from a batch file (see examp...
Expand the command into multiple wget commands so you can send them to SLURM as a list: while IFS= read -r url; do printf 'wget "%s"\n' "$url" done < sgdp-download-list.txt > wget.sh Or, if your sgdp-download-list.txt is just a list of wget command missing the wget at the beginning (which is what your example sugg...
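A dry run of that loop on a throwaway two-line URL list (the URLs are made up):

```shell
dir=$(mktemp -d)
printf '%s\n' 'http://example.com/a.bam' 'http://example.com/b.bam' > "$dir/sgdp-download-list.txt"

# one quoted wget command per input line
while IFS= read -r url; do
  printf 'wget "%s"\n' "$url"
done < "$dir/sgdp-download-list.txt" > "$dir/wget.sh"

cat "$dir/wget.sh"
# wget "http://example.com/a.bam"
# wget "http://example.com/b.bam"
rm -r "$dir"
```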
wget — download multiple files over multiple nodes on a cluster
1,436,277,312,000
How do I let SNMP connections pass through the RHEL7 firewall? When I ran this command on the computer: systemctl stop firewalld, all the SNMP packets passed fine. When I restarted firewalld, all the packets were blocked. I tried several configurations with the firewall running, of course, like: iptables -A INPUT -p ...
The correct way to do this is to add a profile for SNMP to firewalld. Using UDP 161 not TCP vim /etc/firewalld/services/snmp.xml <?xml version="1.0" encoding="utf-8"?> <service> <short>SNMP</short> <description>SNMP protocol</description> <port protocol="udp" port="161"/> </service> Then you should reload your ...
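Once the service file exists, firewalld still has to be told to use it; a sketch of the remaining steps, assuming the default public zone:

```
# pick up the new /etc/firewalld/services/snmp.xml definition
firewall-cmd --reload
# permanently allow the service in the zone facing the monitoring host
firewall-cmd --permanent --zone=public --add-service=snmp
# apply the permanent change
firewall-cmd --reload
```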
How to let SNMP connections pass through the RHEL7 firewall?
1,436,277,312,000
Nmap scanning the network for SNMP-enabled devices: sudo nmap -sU -p 161 --script default,snmp-sysdescr 26.14.32.120/24 I'm trying to figure out how to make nmap return only devices that have specific entries in the snmp-sysdescr object: snmp-sysdescr: "Target device name" Is that possible?
Nmap doesn't contain much in the way of output filtering options: --open will limit output to hosts containing open ports (any open ports). -v0 will prevent any output to the screen. Instead, the best way to accomplish this is to save the XML output of the scan (using the -oX or -oA output options), which will contain...
Nmap scan for SNMP enabled devices
1,436,277,312,000
We are trying to monitor our servers mainly with SNMP. Due to performance reasons we are changing this from single requests to snmp-bulk-requests (as allowed in SNMP v2c). In theory (at least to my knowledge) it should be possible to request several branches/values in a single bulk-request, so the number of tcp-sessio...
The command snmpbulkget does allow you to specify arbitrary, non-contiguous OID requests. Getting the non-repeaters and max-repetitions right may require some experimenting. There is a good example here: http://docstore.mik.ua/orelly/networking_2ndEd/snmp/ch02_06.htm
High-level command to request several branches of snmp in one tcp-session?
1,436,277,312,000
I installed snmp on CentOS 7.2, like so: yum -y install net-snmp net-snmp-utils I made a backup of my snmpd.conf file: cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.orig then I cleared the text, with this: echo "" > /etc/snmp/snmpd.conf and added to the snmpd.conf, the following: rocommunity "#random$" monitoring_...
The com2sec security model is not mandatory anymore. In snmpd.conf it should be enough to do: rocommunity "#randomsometinh$" 2.2.2.2 where 2.2.2.2 is the monitoring IP address allowed to connect. I often prefer to assign a single IP rather than giving access to a whole /24. So this configuration means the SNMP service wil...
How to properly configure snmpd?
1,436,277,312,000
I have a CentOS server (7.2). I am trying to configure this as a SNMP trap receiver. In my snmptrapd configuration, I am calling a very basic shell script just to identify if the trap was received: [root@centos-Main snmp]# cat /etc/snmp/snmptrapd.conf authCommunity log,execute,net public traphandle default /etc/snm...
You've set snmptrapd to accept traps with community name public only: [root@centos-Main snmp]# cat /etc/snmp/snmptrapd.conf authCommunity log,execute,net public But the trap from the Juniper device uses the community name VINOD instead: [root@centos-Main snmp]# tcpdump -i enp0s3 port 162 tcpdump: verbose output sup...
How to configure snmptrapd to process incoming traps from a Juniper device?
1,436,277,312,000
I have a Windows 2003 server and need to poll the DHCP lease information from it with a perl script that is running on a Ubuntu server. Then I need to analyze & store the information in a mysql database. Is there a way to query the leases from a perl script? I can figure out how to process the info after I get it. Tha...
You can perhaps use SNMP, provided SNMP is enabled/allowed for DHCP service on Windows server. Using SNMP queries, one can build a statistics of the lease information from time to time remotely from the DHCP service. $snmp_address = "1.3.6.1.4.1.311.1.3.2.1.1.1"; $getsubnet = "snmpgetnext -v2c -c public -Oqv win_dhcp...
Query DHCP server leases from Perl script
1,436,277,312,000
I installed snmpd on a CentOS 7 minimal installation to query system parameters, for instance: snmpget -v 2c -c public 127.0.0.1 .1.3.6.1.2.1.2.2.1.2 For the above command I get the following result: IF-MIB::ifDescr = No Such Object available on this agent at this OID When I execute: snmpwalk -v 2c -c public...
The SNMP daemon upon installation in CentOS is configured by default to answer to queries of a restricted MIB tree view using the "public" community for security reasons. As configured by default, the default "public" MIB (sub)tree allowed views are only .1.3.6.1.2.1.1 and .1.3.6.1.2.1.25.1.1 ; if you look closely th...
SNMP - No Such Object available on this agent at this OID
1,436,277,312,000
I'm using NET-SNMP 5.7.3 on FreeBSD 12.1. I want to change the engineID with the snmpset command. snmpd.conf: rwcommunity private I enter this command: snmpset -v 2c -c private localhost e x 800000020109840301 The error is: Error in packet. Reason: notWritable (That object does not support modification) Failed object: SNMPv2-...
The answer is to set it in snmpd.conf: engineID a For a test: snmpget -v 2c -c public localhost .1.3.6.1.6.3.10.2.1.1.0 SNMP-FRAMEWORK-MIB::snmpEngineID.0 = Hex-STRING: 80 00 1F 88 04 61 Every engineID begins with 80 00 1F 88. It can't be changed with snmpset; it has to be set in the config file.
How can I change the net-snmp engineID
1,436,277,312,000
Where in the SNMP OID tree does snmpwalk start if no OID is specified, i.e. snmpwalk is started like snmpwalk -v 2c -c public host? From .1.3.6.1.2.1?
Yes, from doing a network capture, it would seem so: SNMP 84 get-next-request 1.3.6.1.2.1 Which is: $ MIBS=+all snmptranslate 1.3.6.1.2.1 SNMPv2-SMI::mib-2 $ MIBS=+all snmptranslate -Of 1.3.6.1.2.1 .iso.org.dod.internet.mgmt.mib-2 Confirmed by reading the source: oid objid_mib[] = { 1, 3, 6, 1, 2, 1 }; [...
Where does snmpwalk start if no OID is specified?
1,436,277,312,000
I have a bunch of windows servers configured with the windows SNMP agent. Each server has four IP addresses and SNMP listens on all of them. There is something very odd with my monitoring server (which is Centos 5.5 32 bit with net-snmp 5.3.2.2). If I have iptables turned off then I have no problems performing snmp q...
Could you delete the following rules: -A OUTPUT -p udp -s 0/0 --sport 1024:65535 -d 0/0 --dport 161:162 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p udp -s 0/0 --sport 161:162 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT And substitute this one, just below -A RH-Firewall-1-INPUT -p udp -...
Unable to make outbound SNMP connections when IPTables is enabled
1,436,277,312,000
I was looking at this link here: http://www.debianadmin.com/linux-snmp-oids-for-cpumemory-and-disk-statistics.html and noticed that the OIDs are the same ones I see for the same stats for our appliance. Is this some kind of standard with SNMP maybe an RFC or something? Does anyone know where I can find the list that t...
The list you're looking for is most probably at http://www.oid-info.com/ Yes, this is some kind of standard: OIDs are objects in the MIB; the global root MIB was defined in RFC 1155. It has since been extended: the SNMP protocol itself is RFC 1157, and the standard MIB-II is defined in RFC 1213.
Where do I find the OID descriptions for SNMPv2 in Linux?
1,436,277,312,000
Is it possible to get Net-SNMP running over TCP instead of UDP? The daemon program can be configured at the terminal to listen for TCP connections with: snmpd tcp:1161 However, is there no flag for snmpget to use TCP?
I think snmpget also supports that. There are some common features described on the snmpcmd(1M) man page that they don't bother repeating on all the individual command pages. Agent Specification The agent specification (see SYNOPSIS) takes the form: [transport-specifier:]transport-address At its sim...
Net-SNMP over TCP?
1,436,277,312,000
I have an IPv6-only (not dual-stack) system. I am wondering how to send an snmptrap from this system, and how to configure snmpd to be accessible on it. I mean, is SNMP ready to use in an IPv6-only environment?
You have to specify udp6. Sending: put trap2sink udp6:[::1]:162 in snmpd.conf. This will send to localhost over IPv6. Receiving: snmptrapd udp6:162
snmp/snmptrap support of ipv6
1,436,277,312,000
I installed the snmpd package on Fedora... view systemview included .1.3.6.1.2.1.1 view systemview included .1.3.6.1.2.1.25.1.1 view all included .1 80 #### # Finally, grant the group read-only access to the systemview view. # group context sec.model sec.level ...
The default fedora config is designed to only let you see the system group for security purposes. You need to replace the config with a better set that lets you access everything on the device. Running snmpconf -g basic_setup can help you with getting started. Or, you can replace the file with the following snippet...
SNMPD only system group available!
1,436,277,312,000
I wanted to know what should I do to restore the configuration files if I've modified or accidentally deleted a file. In my case, I'm talking about /etc/snmp/snmpd.conf, what command should I use to reinstall it?
For restoring the configuration files you can use: sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall packagename So in this case the command (for snmpd) would be: sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall snmpd Credits to this site
How to restore configuration files? (SNMP)
1,436,277,312,000
I want to write a bash script to get some information of Switches through snmpbulkwalk. I would like to use the same script in Linux and OSX environments, so I want to know if there is a way to do a compatible version that identifies the current OS, get the needed SNMP packages for each one and run a bunch of commands...
As far as I know, uname will display the generic name of the operating system. My roommate has the latest (I think) version of OSX, and it displays Darwin when it runs. If you'd like more of an output, uname -a will give you the kernel version, OS version, and a bunch of other information, in addition to the generic n...
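As a sketch of the detection the answer describes (the package-manager commands are assumptions for illustration, not part of the original answer):

```shell
# Branch on the kernel name reported by uname:
# "Linux" on Linux, "Darwin" on OSX.
os="$(uname)"
case "$os" in
  Linux)  pkg_install() { sudo apt-get install -y "$@"; } ;;  # assumes a Debian-style system
  Darwin) pkg_install() { brew install "$@"; } ;;             # assumes Homebrew is present
  *)      echo "Unsupported OS: $os" >&2 ;;
esac
echo "Detected OS: $os"
```

The rest of the script can then call pkg_install and stay identical on both systems.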
How to find out if the script is running in Linux or OSX
1,436,277,312,000
During the latest security upgrades, snmpd was upgraded to 5.7.3. Before finishing the procedure, apt-get upgrade started giving the error: Starting SNMP services::Bad user id: snmp snmpd is also not running. What is happening?
Looking at the post-inst scripts of snmpd, it seems the default Debian user and group of the snmpd package changed from snmp to Debian-snmp. To correct it, it was necessary to edit /etc/default/snmpd and change the following line from: SNMPDOPTS='-Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid' to: SNMPD...
Debian stretch: upgrade of `snmpd` giving an error
1,436,277,312,000
I have already installed nagios-plugins-contrib: sudo apt update sudo apt install nagios-plugins-contrib However, there is no cpu / hdd monitoring plugin in it. I am using a Debian VM, just in case. How can I get this plugin (if it exists, of course)?
The package monitoring-plugins-basic provides two plugins: check_disk to check the disk usage and check_load to check the CPU load: apt install monitoring-plugins-basic see: /usr/lib/nagios/plugins/check_disk --help /usr/lib/nagios/plugins/check_load --help Disk Space Checks Load Checks
Where can I get Nagios plugin for cpu/hdd monitoring?
1,436,277,312,000
As I understand, SNMP Management Information Base databases are used by Network Management Stations to translate data from SNMP agents into understandable form. For example in case of sysUpTimeInstance: $ snmpwalk -v 2c -c public 10.10.10.1 sysUpTimeInstance DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (28267576) ...
It can be very tedious to chain it back manually. There are tools such as snmptranslate to do this sort of thing for you. Try snmptranslate -M /path/to/mibs -m ALL -Pu -Tso|grep -B1 sysUpTimeInstance. To see the full details for sysUpTime, use snmptranslate -Td -OS .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime Thi...
How to understand SNMP MIB?
1,436,277,312,000
I am setting up snmpd and trying to check it with check_snmp. snmpwalk -c public -v 2c localhost
iso.3.6.1.2.1.1.1.0 = STRING: "Linux ik1-325-22819 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64"
iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.8072.3.2.10
iso.3.6.1.2.1.1.3.0 = Timeticks: (45994) 0:07:39.94
iso.3.6....
OIDs are object identifiers. In numerical form, they are represented as strings of numbers separated by dots. They also have a symbolic form, where the numbers are mapped to keywords according to certain definitions. The OID for iso is just 1, since it identifies the first main branch off the root of the OID tree str...
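To illustrate the mapping the answer describes: since iso is just OID 1, turning the symbolic head of the question's example output back into numeric form is a plain text substitution (a shell sketch, not an SNMP tool):

```shell
# "iso.3.6.1.2.1.1.1.0" and "1.3.6.1.2.1.1.1.0" name the same object:
# the symbolic head "iso" maps to the number 1.
oid="iso.3.6.1.2.1.1.1.0"
numeric=$(printf '%s\n' "$oid" | sed 's/^iso/1/')   # swap the symbolic head for its number
echo "$numeric"
# prints 1.3.6.1.2.1.1.1.0
```

For anything deeper than the head of the tree, use snmptranslate as described above rather than string edits.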
What is OID,MIB? Check transfer amount by check_snmp
1,436,277,312,000
I'm on FreeBSD 12. I installed NET-SNMP version 5.7.3 on my system. The problem is in sending traps. For example, link up/down events do not send a trap. The config files are: snmpd.conf
view V included .1
view V included .1.3.6.1.2.1.1
view V included .1.3.6.1.2.1.25.1
view V included .1.3.6.1.6.3...
The answer: add these two monitor directives to snmpd.conf:
monitor -r 1s -e linkUpTrap "Generate linkUp" ifOperStatus != 2
monitor -r 1s -e linkDownTrap "Generate linkDown" ifOperS
How can I send snmp trap from freebsd
1,436,277,312,000
I am packaging an RPM for RHEL6, built from net-snmp-5.7.2.tar.gz. I see that the file /etc/rc.d/init.d/snmpd gets created and packaged, but I do not see the init file for /etc/rc.d/init.d/snmptrapd. Is snmptrapd deprecated? Or did I forget to pass the right switch to ./configure? Thanks
I found the snmptrapd init file in the dist directory. It is called snmptrapd-init.d
Why doesn't build of net-snmp 5.7.2. provide /etc/rc.d/init.d/snmptrapd?
1,436,277,312,000
I want to use SNMPv3 on AIX; the client is Linux, which uses the snmpwalk command. On the AIX side I first create the hash of the password: pwtokey -p HMAC-SHA -u auth mypass 192.178.0.37 The command returns this line: Display of 20 byte HMAC-SHA localized authKey: 18de41acdd2c8f0a1cb24f875g611198ea23e990 Then I edit /etc/snmpd...
Solution found. SNMPv3 on AIX requires snmp.crypto, which was not installed on my system: lslpp -cl snmp.crypto lslpp: Fileset snmp.crypto not installed.
SNMPv3 on AIX from Linux shows authentication failure
1,436,277,312,000
I have three devices (A,B,C) in my LAN all running snmpd. Apart from the community string they all have the same snmpd configuration. They can all run snmpwalk to the other devices, except when trying to connect to Device C. Device C when calling itself, either through 127.0.0.1 or by its <LAN-IP> address, also works. ...
Urghh. firewalld was installed and enabled by default on Fedora 33. Running nmap against device C was the pointer I needed to see something was up. So either disable the firewall if you're on an internal network, or set up some rules for firewalld to play with.
snmpwalk from remote results in timeout
1,436,277,312,000
My question is about zabbix traps with SNMPv3 and snmptrapd service using zabbix_trap_receiver.pl. I have a switch on which I enabled SNMPv3 only, so the switch has no SNMPv1/2c rw or ro communities configured, and I was able to add it as a host on zabbix after a long journey modifying existing SNMPv2 templates. In za...
The way the snmptrapd daemon handles traps doesn't allow receiving any SNMPv3 traps without specifying the EngineID of the sending device, i.e. if you just do the following in /etc/snmp/snmptrapd.conf: createUser snmpv3USER SHA auth_pass AES priv_pass authUser log,execute snmpv3USER perl do "/usr/bin/zabbix_trap_receiver...
SNMPv3 traps in Zabbix
1,436,277,312,000
I have a Ubuntu server which is collecting incoming SNMP traps. Currently these traps are handled and logged using a PHP script. file /etc/snmp/snmptrapd.conf traphandle default /home/svr/00-VHOSTS/nagios/scripts/snmpTrap.php This script is quite long and it contains many database operations. Usually the server recei...
While your question actually confused me, I believe you should move away from using a php script to handle the snmptrapd service. That file (/etc/snmp/snmptrapd.conf) is used by the snmptrapd service, which can be enabled at whatever run level you wish (3, 4, 5) and can be configured to log traps to MySQL, so there is...
Pass SNMP trap packet to a php daemon on Ubuntu
1,436,277,312,000
Trying to setup memory usage monitoring for Nagios using the check_snmp_mem.pl from Nagios SNMP plugin. I could not even get it working from the command line, I mean I go to /usr/lib/nagios/plugins and run the script, it gets a "No response from remote host" error. [root@nagios plugins]# ./check_snmp_mem.pl -H rhel01 ...
With help from another colleague, we worked out why it didn't work. Three things:
First, we had agentaddress tcp:x.x.x.x:161 in snmpd.conf; we just deleted the line.
Second, iptables was blocking UDP port 161; we added rules to allow UDP port 161.
Third, something was wrong with the script, as you can see from the error message about line...
No response from remote host for Nagios check_snmp_mem.pl plugin
1,436,277,312,000
I have trying to set up MRTG on my server, following this guide: https://help.ubuntu.com/community/MRTG I followed it as far as, cfgmaker <snmp_community_string>@<ip_address_of_device_to_be_monitored> > /etc/mrtg.cfg at this point I have no idea what my snmp community string is, I've looked around to try to find out, ...
What kind of SNMP-capable device are you going to monitor (with IP address my_ip_address)? The SNMP "community string" is kind of like a password. An SNMP application/MRTG will present the community string to that device when it requests statistics. If the community string is not correct, the device will not respond....
How do I generate an SNMP community string for MRTG?
1,565,440,662,000
Trying to install the last version of OpenNMS in the last stable version of Debian, following the official installation instructions leads to an apt error with the repositories: root@triplecero:~# apt update Ign:1 http://nightly.odoo.com/12.0/nightly/deb ./ InRelease Ign:2 https://debian.opennms.org stable InRelease H...
As I thought the apt error message was caused by an error in the OpenNMS repository: two deb packages with libraries needed, jicmp and jicmp6, weren't available in the repository for i386 architectures. After reporting the issue the packages are now available and the opennms package can be installed fulfilling the de...
OpenNMS 24 installation in Debian Buster: unmet dependencies
1,565,440,662,000
Sorry if this is a repeat. I searched, but with no luck. I'm using SNMPd on an openwrt/wr host with some ppp and tun connections. These connections get IDs in the if table, and will actually get a new ID whenever the tunnels reconnect. Nagios (check_mk), when that happens, complains that an interface went down; oh, a...
I figured it out, so here's an answer so I'm not denvercoder9. Adding a new config to the snmpd.conf seems to have both trimmed the interface march as well as prevented complaints. interface_replace_old yes I think that's the one. It could be that simple. Try it yourself if you run into the same problem, and let me ...
Nagios/SNMP - devices alerting when ppp/tun connections cycle
1,565,440,662,000
On our servers (Debian, CentOS and Ubuntu) we set in snmpd.conf extend .1.3.6.1.4.1.2021.7890.1 distro "/bin/cat /etc/debian_version" This way a centralized monitor (Observium) reads the OS distro. Fine. I read http://net-snmp.sourceforge.net/docs/man/snmpd.conf.html but ... The OID .1.3.6.1.4.1.2021.7890.1 is a d...
Sorry, it was an SNMP-extensions newbie question. To use variables and stdin/stdout redirection, you have to prepend the command with the interpreter, like this: extend .1.3.6.1.4.1.2021.7890.2 purpose "/bin/cat /sys/devices/virtual/dmi/id/product_name"
setting snmp extensions in snmpd.conf
1,565,440,662,000
I have a bash script that is running an SNMPGET of two values. I want to take the results and put them in an array. Here is the code: OUTPUT=`snmpget -v2c -c public -Oqv 192.168.0.33' \ ' sysName'\ ' SysLocation' echo $OUTPUT ARRAY=($OUTPUT) echo ${ARRAY[0]} echo $OUTPUT returns "Private Network" "Server 4 ". When ...
Replace ARRAY=($OUTPUT) by eval ARRAY=($OUTPUT)
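To see why the eval is needed, here is a small bash demonstration with a hard-coded $OUTPUT (the same two quoted values the question reports):

```shell
OUTPUT='"Private Network" "Server 4 "'
ARRAY=($OUTPUT)        # naive word splitting: quote characters are kept, fields break apart
echo "${ARRAY[0]}"     # prints "Private  (with the quote, field broken)
eval ARRAY=($OUTPUT)   # eval makes the shell re-parse the embedded quotes
echo "${ARRAY[0]}"     # prints Private Network
```

Without eval, the quotes inside $OUTPUT are just ordinary characters, so the array ends up with one element per whitespace-separated word.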
Setting Qualifiers for Bash Array
1,565,440,662,000
The following is net-snmp output and, as you see, diskIOLA is not available:
SNMP table: UCD-DISKIO-MIB::diskIOTable
diskIOIndex diskIODevice diskIONRead diskIONWritten diskIOReads diskIOWrites diskIOLA1 diskIOLA5 diskIOLA15 diskIONReadX diskIONWrittenX
25 sda 845276160 2882477056 576632 ...
The information you indicate you have is not enough to calculate disk utilization %. Disk utilization % is calculated as disk_time_spent_in_io / elapsed_time. For example, if your disk spends 0.25 seconds performing IO in a 1 second period, then your disk is 25% utilized. The number of operations is meaningless when i...
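As a worked example of that formula (the sample numbers are made up for illustration):

```shell
# 250 ms of I/O time observed during a 1000 ms sampling window
# gives 250/1000 = 25% utilization.
busy_ms=250
elapsed_ms=1000
awk -v b="$busy_ms" -v e="$elapsed_ms" 'BEGIN { printf "%.0f%%\n", 100 * b / e }'
# prints 25%
```

The key point is that both terms are time measurements; byte or operation counts alone cannot be turned into a utilization percentage.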
How to calculate disk IO load percentage?
1,565,440,662,000
I am trying to secure my latest hosting server, and realized that cupsd is running. After checking, it's running on all of my servers. In the name of security, I decided to permanently disable this service as I have no need for printing services. Before I do this however, I want to make sure that my SNMP servic...
No! I haven't checked, but maybe CUPS can deliver values over SNMP; SNMP certainly doesn't need CUPS, though.
Does snmp rely on or need CUPS to function?
1,565,440,662,000
I am new to SNMP and I cannot find any clear article on whether we can add a data node in SNMP. And also, how does SNMP collect data? I want to monitor the following resources, which can be obtained from a SAR report. So, please tell me how to add this under SNMP, or at least how SNMP collects data, so that I will try to fi...
The Net-SNMP package supplied with RedHat is actually a very flexible monitoring agent, which will get values for all of the metrics you listed by default out of the box. However, it's old: the SNMP protocol itself has been around for over two decades, with significant improvements made over that span. The learning cu...
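As a sketch of what a minimal agent configuration might look like (the community string, source address and thresholds here are example values, not the Red Hat defaults):

```
# snmpd.conf
rocommunity public  127.0.0.1     # read-only access from localhost
load  5 10 15                     # UCD-SNMP-MIB load-average warning thresholds
disk  /  10%                      # warn when / drops below 10% free
```

With directives like these in place, the CPU, memory and disk metrics listed in the question become readable with snmpwalk against the UCD-SNMP-MIB and HOST-RESOURCES-MIB subtrees.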
How to use SNMP to get any information that we need in Redhat?
1,565,440,662,000
I'm trying to know if some GUI process is idle o minimized in Linux, using Net-SNMP. I've been doing research and as far as I know, SNMP seems to be designed for monitoring services, not processes run by regular users. I've found just one MIB object, hrSWRunStatus (RFC 2790), which has only four running statuses: runn...
In the comments you said you want to develop a time tracking app, for tracking application usage. I guess you might do it by tracking which window is the active one at any given time. To do that, you would need to get access to the user's X11 session, and then repeatedly query its X11 property named _NET_ACTIVE_WINDO...
Identify idle or minimized process
1,565,440,662,000
I have just started to learn SNMP protocols, and notably the Centreon program, and I need to find out how I can verify a web-page response or a web service (in HTTP simple mode). The only source I have found for the moment is this one https://documentation-fr.centreon.com/docs/plugins-packs/fr/1.x/catalog.html where it says...
Centreon can collect information on various systems using SNMP, that's true. But Centreon plugins can add other methods for collecting information. In particular, App-Protocol-HTTP adds the ability to make HTTP queries and check for: response time presence of a specific string in a HTTP response presence of specific...
How can I verify the web page response using snmp? [closed]
1,565,440,662,000
I'm trying to get my Ubuntu LTS 16.04 server to send SNMPTraps to my HP OVO server. The reason for this is that there are legacy devices on the network that cannot send an SNMP warning upon failure, but can still be accessed through a network-connected card. Due to this, my Ubuntu server connects to that card to asses...
snmptrap doesn't require any specific configurations. To validate whether your script is sending traps, you can use tcpdump to watch traffic. SNMP traps are UDP and usually destined for port 162, so this will work: tcpdump -i <interface> udp dst port 162 Then, in another screen or terminal, test your snmptrap comman...
snmptrap - underlying config?
1,565,440,662,000
We have a process (should be a client program) on RHEL 7.4 that sends snmp traps to a Solaris server that has a trap receiver process (should be a server program listening on 162/1691) on another machine (with IP 10.xx.xx.xx). I have ssh access to the RHEL box. Can I install some tracing tool on RHEL 7.4 to trace these snmp traps?...
If you are running this on your client machine, you can use the example below: tcpdump dst 10.xx.xx.xx and port 162 and not arp Replace the IP to suit your setup; the "and not arp" part excludes ARP traffic. If you are running this on the server side, replace dst with src and use the client IP instead of the server IP.
How to trace snmp trap sent by client process?
1,565,440,662,000
I created an SNMP initial user several years ago for a project and have forgotten the password. Is there a way to reset net-snmp back to default, no users, and recreate the initial user and subsequent users? This is on Solaris 10 and Solaris 11.3.
There is a configuration file that is maintained by net-snmp itself. On Debian Linux, it's at /var/lib/snmp/snmpd.conf; I don't know exactly where net-snmp puts it on Solaris. But that file contains user definitions as long lines, starting with the usmUser keyword. The user's password will be stored in encrypted/hashe...
Reset net-snmp? I've forgotten the initial user password
1,565,440,662,000
If I want to find out network traffic on my Linux Servers using SNMP. I use the ifOutOctets. and ifInOctets. OIDs in an snmpget request. Where do these OIDs get the data from? I tried looking at the rfc for these OIDs but I'm still none the wiser. https://www.rfc-editor.org/rfc/rfc3635#section-3.2.5
It depends. For ifOutOctets and ifInOctets, snmpd probably gets them by querying the kernel (either directly via a syscall or perhaps by examining /proc/net/dev). For other OIDs, it may get them by running an external command to extract and process the data before returning it. e.g. see Extending snmpd using shell s...
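On Linux you can see the kind of kernel counters that ifInOctets/ifOutOctets are typically derived from by parsing /proc/net/dev yourself (a sketch; the exact source snmpd uses can vary by platform and build):

```shell
# /proc/net/dev: two header lines, then one line per interface.
# After the interface name, the 1st numeric field is receive bytes
# and the 9th is transmit bytes.
awk -F: 'NR > 2 {
    gsub(/ /, "", $1)            # strip padding around the interface name
    split($2, f, " ")            # split the counters into fields
    printf "%s rx_bytes=%s tx_bytes=%s\n", $1, f[1], f[9]
}' /proc/net/dev
```

Comparing these numbers against an snmpget of ifInOctets/ifOutOctets for the same interface is a quick way to confirm where the agent's values come from.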
Where does SNMP OIDs get the data from?
1,565,440,662,000
I'm currently on php 5.6 on CentOS 6.7. I'm trying to install yum install php-snmp I keep getting Loaded plugins: fastestmirror, refresh-packagekit, security Setting up Install Process Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.math.princeton.edu * extras: mirror.5nin...
You've got a non-standard PHP install on your system. Either back it out, replace it with the CentOS distribution's PHP and then install the php-snmp rpm, or keep your existing PHP and yum install php56w-snmp (note the 'w').
Error installing php-snmp in CentOS
1,565,440,662,000
Is it possible to do an snmpwalk in a Perl script and put the output in a table, to make a sort of association where for each hostname I have the if index and desc on the same line? I have the script in bash, but the output I get doesn't give the association I want, so I need your help. #!/bin/bash Rep_Scripts='/home/s...
First, refactoring your question: How can I construct a perl script that inputs data from snmpwalk for each IP/Host and outputs a table for each OID. Second, your example snmpwalk commands make no sense. It might make sense if OID were a variable. You probably mean to use snmpwalk -v2 -c public ${ip} ${OID} Y...
make a table output
1,365,266,803,000
I connect to a remote ssh server by running this command: ssh -D 12345 [email protected] This creates a socks proxy that I can use with Firefox to bypass censorship in my country. However, I can't take advantage of it in the command line. Let's say my country blocks access to youtube. How can I use the ssh connect...
Youtube-dl doesn't support a SOCKS proxy. There's a feature request for it, with links to a couple of working proposals. Youtube-dl supports HTTP proxies out of the box. To benefit from this support, you'll need to run a proxy on myserver.com. Pretty much any lightweight proxy will do, for example tinyproxy. The proxy...
How to use socks proxy for commands in Terminal such as youtube-dl?
1,365,266,803,000
There are two SOCKS proxies that I know about that support transparent proxying for any outgoing TCP connection: Tor and redsocks. Unlike HTTP proxies, these SOCKS proxies can transparently proxy any outgoing TCP connection, including encrypted protocols and protocols without metadata or headers. Both of these proxie...
Here is how it does it:
static int getdestaddr_iptables(int fd, const struct sockaddr_in *client,
                                const struct sockaddr_in *bindaddr,
                                struct sockaddr_in *destaddr)
{
    socklen_t socklen = sizeof(*destaddr);
    int error;
    error = getsockopt(fd, SOL_IP, SO_ORIGINAL_DST, destaddr, &socklen);
    i...
How does a transparent SOCKS proxy know which destination IP to use?
1,365,266,803,000
To reach an isolated network I use an ssh -D socks proxy. In order to avoid having to type the details every time I added them to ~/.ssh/config: $ awk '/Host socks-proxy/' RS= ~/.ssh/config Host socks-proxy Hostname pcit BatchMode yes RequestTTY no Compression yes DynamicForward localhost:9118 Then I create...
Can /usr/bin/ssh really not accept systemd-passed sockets? I think that's not too surprising, considering: OpenSSH is an OpenBSD project systemd only supports the Linux kernel systemd support would need to be explicitly added to OpenSSH, as an optional/build-time dependency, so it would probably be a hard sell. ...
On-demand SSH Socks proxy through systemd user units with socket-activation doesn't restart as wished
1,365,266,803,000
Is there a way to redirect all traffic, UDP and TCP, coming to and from eth1 and eth2 through a SOCKS proxy (Tor) which then passes it through eth0? eth0: Internet in - leads to the main router, then the cable modem eth1: A USB Ethernet port setup as a modem (I think that's the word I'm looking for, right?) eth2: A US...
First, you need tun2socks (often a part of the 'badvpn' package). tun2socks sets up a virtual interface which you can route traffic through, and that traffic will get sent through the target socks proxy. Setting it up gets a little tricky as you only want to route certain traffic through the tunnel. This script should...
Redirect ALL packets from eth1 & eth2 through a SOCKS proxy
1,365,266,803,000
I have a list of SOCKS proxy servers from this site. I've read about creating a dynamic tunnels with ssh -D and to be honest i've tried that already. Unfortunately for some reason I cannot connect to any of the proxy servers from the list. I am using OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k on BackTrack. If anyon...
It sounds to me like you need a socks client, or a ssh client that understand socks. -D is for ssh to be a socks server/proxy. You could use ssh under tsocks, or another SOCKS wrapper. Or use ssh's ProxyCommand in conjunction with socat or nc -X: ssh -o ProxyCommand='socat - socks:B:%h:21,socksport=1080' C To have a ...
SSH over Socks proxy without username or password
1,365,266,803,000
I connect to my Server on the internet using ssh -D 3128 [email protected]. If I am right, I thereby open a SOCKS v5 proxy to my Server. Using Firefox and FoxyProxy I can now add this to my proxies and tunnel my HTTP traffic over it. However, I'd like to use this SOCKS proxy for all my traffic. Friends told me that the...
SOCKS5 is a protocol (i.e. in the application layer of OSI), so plain network-routing (e.g. via iptables) alone won't do. (It's probably necessary, but not sufficient.) What you need is a proxifier. Without having tried it, tun2socks, allowing you to "socksify TCP at the network layer", looks promising (as does proxy...
System wide SOCKS5 Proxy
1,365,266,803,000
A socksify-like program for Fedora? Socksify, tsocks, ProxyCommand in ssh: they're great pieces of software that can be used to "constrain" given apps to use a SOCKS5 proxy (one created with an SSH tunnel), even if the given apps don't support SOCKS5 themselves. But: are there any solutions to use an SSH Tunnel to "...
Just found a new solution for this recently that is REALLY neat. Take a look at sshuttle. https://github.com/apenwarr/sshuttle/
VPN like solution for SSH Tunneling?
1,365,266,803,000
I got a great answer for my previous question about connecting from Machine A to Machine C via Socks proxy located on Machine B. Say Machine B Ip is 218.62.97.105 and it is listening on port 1080 The command for that: ssh -o ProxyCommand='socat - socks:218.62.97.105:HOST_C:21,socksport=1080' I wonder if it is possible...
With: socat tcp-listen:12345,reuseaddr,fork,bind=127.1 socks:218.62.97.105:11.11.11.11:3128,socksport=1080 you will have a socat waiting for TCP connections on port 12345 on the loopback interface, and forward them to 11.11.11.11:3128 by way of the socks server on 218.62.97.105:1080 You can then use that to connect t...
SSH jumping over socks(4/5) proxy chain. Host -> socks proxy -> socks proxy -> destination
1,365,266,803,000
I have 2 servers, A and B. I want to create a tunnel from my system to server B but I have some limits to do this. So I have to first tunnel to server A and from server A to server B. My goal is to have a SOCKS Proxy to browse the web. How can I do this?
I am showing you a very basic way to do it. Here I am assuming that B is directly accessible from A. There may be variations according to various situations. On A: ssh -D socks_port B This will open up the port socks_port on A as a SOCKS proxy. On your system: ssh -L local_port:localhost:socks_port A This will forwa...
How to create a SSH tunnel over 2 servers?
1,365,266,803,000
Say I have a local SOCKS connection (established by ssh -D8888). I use this to do many things, like bypassing internet censorship. But sometimes the ssh connection unexpectedly breaks, and then the SOCKS proxy is down. Is there anything I can use to check whether the local SOCKS proxy is still alive? Thanks,
You can test the availability of a socks proxy by trying to load a website through the tunnel. curl -sSf --socks5-hostname localhost:8888 www.google.com > /dev/null In the above command, curl will be silent, unless an error occurs. You can wrap this command in a for loop within a script. The return value of curl is z...
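Wrapped in a small function, that curl check can drive a watchdog script (a sketch; the port number and test URL are assumptions to adapt):

```shell
# Return success if the SOCKS proxy on the given local port answers.
check_socks() {
  curl -sSf --max-time 5 --socks5-hostname "localhost:$1" https://www.google.com >/dev/null 2>&1
}

if check_socks 8888; then
  echo "proxy up"
else
  echo "proxy down"     # e.g. restart the ssh -D tunnel here
fi
```

Run it from cron or a loop to restart the tunnel automatically whenever the check fails.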
is it possible to check whether a local socks proxy works with shell script?
1,365,266,803,000
I want to setup a linux server (Ubuntu Server 14.04 in this case) to be used as a SOCKS5 proxy by software on another client. Now this is pretty easy by running ssh -f -N -D 0.0.0.0:1080 localhost as explained by this guide. This works perfectly. Now what the guide isn't telling me is how do I automatically start the ...
I fixed this by running ssh-keygen as root user (not sudo), not specifying any name for the key (e.g. use default name and location) and not providing any passphrase. Then I made sure the permissions for all files in /root/.ssh were set to 600 and the .ssh folder was set to 700. Then just add the command from the ques...
Run SSH SOCKS5 proxy on system startup
1,365,266,803,000
I have seen questions on tunnelling SSH through multiple machines but I want to tunnel a SOCKS connection. Normally I would use something like ssh -C2qTnN -D 8080 username@remote_machine to make the local port 8080 a SOCKS tunnel through the remote machine. I would like to open a socks connect from my laptop on machin...
So, if I understand correctly, you can ssh from machine 1 to machine 2 but not from your laptop (from which you can ssh to machine 1). So you'd like to have a socks server on machine 1 and use it from your laptop? So looks like all you need is port forward that 8080: run on your laptop: ssh -nL 8080:localhost:8080 mac...
Tunneling SSH through multiple machines (for SOCKS)
1,365,266,803,000
I live in Iran and due to its strict censorship, we all have extreme difficulty accessing normal sites and services. So I thought it would be a great idea to set up a proxy server on my VPS, so that I myself could find a way around this censorship. Can anyone show me a step by step working tutorial on this matter...
Do you know flossmanuals.net? They've got a great manual on How to Bypass Internet Censorship (also as epub and pdf for offline use -- and note the translations, among others in Farsi). Among many tools and methods, they cover SOCKS proxies. But given a VPS somewhere, the other ways they mention should be considered,...
How can I setup a SOCKS4 or 5 proxy on CentOS 5.8?
1,365,266,803,000
I was going through this post and couldn't quite follow what is being implied there. A great "feature" I use every day at work: The ability to have SSH listen on port 443 so I can create a tunnel which bypasses my work firewall, allowing me to run a local SOCKS proxy tunneled through SSH to my internet facing ...
That's about it, but you've inverted home and office. The point is that the office firewall rejects outgoing connections other than web traffic. But since HTTPS traffic and SSH traffic are both encrypted, it can't easily distinguish between them, so the firewall just blocks outgoing connections to ports other than 443...
run a local SOCKS proxy tunneled through SSH
1,365,266,803,000
I'm trying to set up a script that I can call up easily in a WM. The idea is to establish a socks tunnel via ssh to a known-good server and then start chromium with the appropriate environment variables...Then, wait until that instance of chromium exits and then unbind the port. The last part is important, because i...
The best I can think of is running pgrep in a loop. If you have more than one chromium running, you can isolate your script in a separate PID namespace with unshare or firejail, for instance.
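The pgrep loop could look like this (a sketch; the process name and polling interval are assumptions to adjust for your setup):

```shell
# Poll until no process named "chromium" remains, then continue the script.
while pgrep -x chromium >/dev/null 2>&1; do
  sleep 2
done
echo "chromium exited, unbinding port"
```

If several chromium instances may run at once, isolate your script's instance in its own PID namespace (unshare/firejail) as noted above, so the loop only sees that one.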
wait for chromium to exit before continuing a shell script
1,365,266,803,000
I'm experimenting with OpenVPN connections routed through Tor, using pairs of Tor gateway and OpenVPN-hosting VMs. On the server side, link local ports are forwarded to Tor hidden-service ports on the associated gateway VM. On the client side, OpenVPN connects through socks proxies on the associated Tor gateway VM. Th...
Op (I, that is) didn't take this OpenVPN FAQ seriously enough: One of the most common problems in setting up OpenVPN is that the two OpenVPN daemons on either side of the connection are unable to establish a TCP or UDP connection with each other. This is almost [always] a result of: ... A software firewall running on...
How can I configure OpenVPN to wait for slow SOCKS proxies?
1,365,266,803,000
According to the answer of this question, I have my /etc/tsocks.conf containing these lines: path { server = localhost server_port = 1081 reaches = <ip-address-of-server-b>/32 } path { server = localhost server_port = 1082 reaches = <ip-address-of-server-d>/32 } and I have run these two commands: ssh -fND :1081 serve...
How about using two different configuration files for tsocks? According to this manpage, tsocks will read its configuration from the file specified in the TSOCKS_CONF_FILE environment variable. So you could split your tsocks.conf to tsocks.1081.conf and tsocks.1082.conf and then do something like this (bash syntax): $...
how to specify the forwarding port when using multiple tsocks services?
1,365,266,803,000
Background I work on a corporate network that is behind a proxy server. I also work with some remote sites that I am able to access via a bastion / jump host ssh proxy. In my ~/.ssh/config I have a proxy configuration for my SSH tunnels that allows the jumping through our bastion hosts in order to reach the remote la...
I recommend changing your ProxyCommand from using nc to use -W. For example: ProxyCommand ssh -l USERNAME BASTIONHOST1 -W %h:%p That has fewer requirements for the bastion host, so it is less likely to break in case the administrator decides to change how the bastion host is configured. I don't think there is any way...
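For the destination-based routing part, per-host blocks in ~/.ssh/config can select different bastions (the host patterns and names below are placeholders):

```
# Remote lab A goes through the first bastion
Host 10.1.*
    ProxyCommand ssh -l USERNAME BASTIONHOST1 -W %h:%p

# Remote lab B goes through a second bastion
Host 10.2.*
    ProxyCommand ssh -l USERNAME BASTIONHOST2 -W %h:%p
```

ssh picks whichever block matches the destination, so no manual switching is needed.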
How to route traffic through different proxy servers based on destination
1,365,266,803,000
I'm trying to setup proxychains on Kali like this : User > Tor > SOCKS5 > Out I've created my SOCKS5 server with danted running on port 1080. I setup an SSH connection on my Kali distrib : ssh -NfD 1080 user@address And I'm able to connect to the SOCKS5 server without trouble. Same when I'm trying to connect to Tor ...
I found an answer to my question and for a visibility purpose I think responding is better than editing. So I wanted to use Tor and a SOCKS5 proxy at the same time using proxychains. There are two ways to achieve that : With dante server Dante server is a SOCKS5 server (and client) with lots of options I don't know ye...
Proxychains, Tor, SSH and Danted. Connection denied
1,365,266,803,000
I have some bridge host, which allows access to protected network. I connect to it using this command: ssh sergius@bridge_host -D 3128 Thus, I can turn on socks proxy in browser and it works. I can login to hosts on that network with this command: ssh -o 'ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p' sergius@...
I was answering a similar question not long ago. I didn't try it, but this one should work for you: sshfs -o ProxyCommand="/bin/nc.openbsd --proxy localhost:3128 \ --proxy-type socks5 %h %p" sergius@$host: /home/sergius/work/SSHFS/$host/ The SSHOPT=VAL is just the format of the option you want to use. You need to rep...
HOWTO: sshfs via socks proxy
1,365,266,803,000
This is slightly different than the other SSH questions I have seen on here so here it goes. I have a complex setup for accessing a web application and unfortunately there is no way around it. Here is the scenario and systems involved (IP addresses anonymized for obvious reasons): System Alpha System Bravo System Char...
Seems like if you can (from A) ssh to C, you can do: user@Alpha:~$ ssh -L1234:localhost:1234 Charlie user@Charlie:~$ ssh -Dlocalhost:1234 Delta ... and at that point, you can have Firefox use localhost:1234 as a SOCKS proxy. The local SSH will proxy that over the tunnel to C, where that ssh is listening as a SOCKS se...
SSH Tunnel Between Multiple Hosts
1,365,266,803,000
I have this situation: server 1: public ip x.x.x.x private ip 192.168.0.1 server 2: private ip 192.168.0.10 The server 1 can reach the internet with both interfaces: ping -i x.x.x.x www.google.com www.google.com is alive ping -i 192.168.0.1 www.google.com www.google.com is alive The server 2 can reach onl...
The -D option enables a SOCKS4/5 server. It is not identical to a normal HTTP/FTP proxy and therefore needs to be interfaced differently. A lot of browsers support SOCKS proxies, but usually not via an http_proxy/ftp_proxy environment variable. You can wrap programs which do not support SOCKS directly with tsocks. See...
Tunnel HTTP traffic using another machine via SSH
1,365,266,803,000
I use different proxies for different purposes. I use shadowsocks proxy for my general purpose web surfing. For going to bank websites, I disable proxy. For accessing some websites related to my work, I should use an ssh tunnel proxy. So I have a Network Proxy settings GUI opened always and constantly changing betw...
You can write a proxy.pac (Proxy Auto Configure) file/script and configure that in your browser to direct what proxy (if any) to use when. It would look something like:
function FindProxyForURL(url, host) {
    var socksProxy = "SOCKS ip.of.sock.proxy:port";
    var workProxy = "PROXY ip.of.work.proxy:port";
    var n...
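A complete minimal proxy.pac along those lines might be (all addresses and domain names are placeholders for the question's three cases):

```
function FindProxyForURL(url, host) {
    // bank sites: no proxy at all
    if (dnsDomainIs(host, ".mybank.example"))
        return "DIRECT";
    // work sites: the ssh tunnel's SOCKS proxy
    if (dnsDomainIs(host, ".work.example"))
        return "SOCKS 127.0.0.1:1081";
    // everything else: the shadowsocks local port
    return "SOCKS 127.0.0.1:1080";
}
```

Point the browser's "automatic proxy configuration URL" at this file and it will pick the right proxy per site, with no manual switching.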
managing multiple proxies in linux