I have a huge text file (~33 GB) and, due to its size, I wanted to read just the first few lines to understand how the file is organized. I tried head, but it took seemingly forever to finish. Is it because in UNIX, head needs to run through the WHOLE file first before it can do anything? If so, is there a faster way to display part of such a file?
This doesn't really answer your question; I suspect the reason head is slow is as given in Julie Pelletier's answer: the file doesn't contain any (or many) line feeds, so head needs to read a lot of it to find lines to show. head certainly doesn't need to read the whole file before doing anything, and it stops reading as soon as it has the requested number of lines.

To avoid slowdowns related to line feeds, or if you don't care about seeing a specific number of lines, a quick way of looking at the beginning of a file is to use dd; for example, to see the first 100 bytes of hugefile:

    dd if=hugefile bs=100 count=1

Another option, given in Why does GNU head/tail read the whole file?, is to use the -c option to head:

    head -c 100 hugefile
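The two byte-based approaches can be compared on a small synthetic newline-free file (the file name bigfile is made up for the demonstration; it stands in for the 33 GB original):

```shell
# Build a ~100 kB file containing no newlines at all (the worst case for
# head's default line-based mode), then read its first 100 bytes two ways.
tmpdir=$(mktemp -d)
yes x | tr -d '\n' | head -c 100000 > "$tmpdir/bigfile"

head -c 100 "$tmpdir/bigfile" | wc -c            # byte-count mode of head
dd if="$tmpdir/bigfile" bs=100 count=1 2>/dev/null | wc -c

rm -r "$tmpdir"
```

Both commands report 100 bytes, and neither ever reads past the first block of the file.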
why does it take so long to read the top few lines of my file?
I need to grab the first lines of a long text file for some bugfixing on a smaller file (a Python script does not digest the large text file as intended). However, for the bugfixing to make any sense, I really need the lines to be perfect copies, basically byte-by-byte, and to pick up any potential problems with character encoding, end-of-line characters, invisible characters, or whatnot in the original txt. Will the following simple solution accomplish that, or would I lose something using the output of head?

    head infile.txt > output.txt

A more general question on binary copying with head, dd, or other tools is now posted here.
POSIX says that the input to head is a text file, and defines a text file:

    3.397 Text File
    A file that contains characters organized into zero or more lines. The lines
    do not contain NUL characters and none can exceed {LINE_MAX} bytes in length,
    including the <newline> character. Although POSIX.1-2008 does not distinguish
    between text files and binary files (see the ISO C standard), many utilities
    only produce predictable or meaningful output when operating on text files.
    The standard utilities that have such restrictions always specify "text files"
    in their STDIN or INPUT FILES sections.

So there is a possibility of losing information.
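As a quick sanity check (a sketch with made-up file names): for a file whose first lines fit POSIX's definition of a text file, GNU head does pass the bytes through unchanged, including CRLF endings and tabs, which cmp can verify:

```shell
tmpdir=$(mktemp -d); cd "$tmpdir"
# CRLF line endings and an embedded tab should survive the copy byte-for-byte.
printf 'line1\r\n\tline2\r\n' > infile.txt

head infile.txt > output.txt    # fewer than 10 lines: copies the whole file
cmp infile.txt output.txt && echo identical
# prints: identical

cd / && rm -r "$tmpdir"
```

The cases to worry about are the ones POSIX carves out: NUL bytes, over-long lines, or a missing final newline, where behavior may differ between implementations.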
does head input > output copy all invisible characters to the new file?
What command could I create that will list the first 4 lines of all the files in a given directory?
Here's a sample from my httpd directory with the command head -n 4 /var/log/httpd/* for the first 4 lines:

    [root@xxx httpd]# head -n 4 /var/log/httpd/*
    ==> /var/log/httpd/access_log <==
    xxxx - - [06/Dec/2015:22:22:45 +0100] "GET / HTTP/1.1" 200 7 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36 Vivaldi/1.0.303.52"
    xxxx - - [06/Dec/2015:22:22:46 +0100] "GET /favicon.ico HTTP/1.1" 404 291 "http://195.154.165.63:8001/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36 Vivaldi/1.0.303.52"

    ==> /var/log/httpd/access_log-20151018 <==
    xxxx - - [12/Oct/2015:14:05:42 +0200] "GET /git HTTP/1.1" 404 281 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"
    xxxx - - [12/Oct/2015:14:05:42 +0200] "GET /favicon.ico HTTP/1.1" 404 289 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"
    xxxx - - [12/Oct/2015:14:05:43 +0200] "GET /favicon.ico HTTP/1.1" 404 289 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"
    xxxx - - [12/Oct/2015:14:06:24 +0200] "GET /git HTTP/1.1" 502 465 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0"

    ==> /var/log/httpd/access_log-20151115 <==
    xxxx - - [14/Nov/2015:18:56:04 +0100] "GET / HTTP/1.1" 200 7 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"
    xxxx - - [14/Nov/2015:18:56:05 +0100] "GET /favicon.ico HTTP/1.1" 404 291 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"
    xxxx - - [14/Nov/2015:18:56:05 +0100] "GET /favicon.ico HTTP/1.1" 404 291 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"
    xxxx - - [14/Nov/2015:18:58:28 +0100] "GET /phpmyadmin HTTP/1.1" 403 294 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0"

Replace head -n 4 with head -n 1 for the first line only.
And you can replace the directory /var/log/httpd/* with your own directory, for example /my/directory/*, but don't forget the wildcard at the end (*). The wildcard tells the shell to match all the (non-hidden) files in the directory.
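A self-contained sketch of the same idea, using a throwaway directory instead of /var/log/httpd:

```shell
tmpdir=$(mktemp -d)
seq 10    > "$tmpdir/a.log"
seq 20 30 > "$tmpdir/b.log"

# With more than one file argument, head prints a "==> name <==" header
# before each file's first 4 lines.
head -n 4 "$tmpdir"/*

rm -r "$tmpdir"
```

The output shows two headers, followed by lines 1-4 of a.log and 20-23 of b.log respectively.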
List the first 4 lines of all the files in a given directory
Given a file, foo.txt:

    1
    2
    3
    4
    5

Say we want to change it to contain:

    1
    2
    3

Why does head -n3 foo.txt > foo.txt leave foo.txt empty?
This happens because the > redirection is performed by the shell before the head program is started. The > redirection truncates the file if it exists, so by the time head reads the file, it is already empty.
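A common workaround, sketched here, is to write to a temporary file and rename it over the original only after head has finished reading:

```shell
tmpdir=$(mktemp -d); cd "$tmpdir"
printf '1\n2\n3\n4\n5\n' > foo.txt

# Writing to a different file avoids truncating foo.txt before head reads it;
# mv then atomically replaces the original.
head -n 3 foo.txt > foo.txt.tmp && mv foo.txt.tmp foo.txt

cat foo.txt      # now contains only the first three lines

cd / && rm -r "$tmpdir"
```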
Why can't I trim a file using `head`? [duplicate]
I need to find all files that contain, in the first line, the strings "StockID" and "SellPrice". Here are some examples of files:

1.csv:

    StockID Dept Cat2 Cat4 Cat5 Cat6 Cat1 Cat3 Title Notes Active Weight Sizestr Colorstr Quantity Newprice StockCode DateAdded SellPrice PhotoQuant PhotoStatus Description stockcontrl Agerestricted
    <blank> 1 0 0 0 0 22 0 RAF Air Crew Oxygen Connector 50801 1 150 <blank> <blank> 0 0 50866 2018-09-11 05:54:03 65 5 1 <br />\r\nA wartime RAF aircrew oxygen hose connector.<br />\r\n<br />\r\nAir Ministry stamped with Ref. No. 6D/482, Mk IVA.<br />\r\n<br />\r\nBrass spring loaded top bayonet fitting for the 'walk around' oxygen bottle extension hose (see last photo).<br />\r\n<br />\r\nIn a good condition. 2 0
    <blank> 1 0 0 0 0 15 0 WW2 US Airforce Type Handheld Microphone 50619 1 300 <blank> <blank> 1 0 50691 2017-12-06 09:02:11 20 9 1 <br />\r\nWW2 US Airforce Handheld Microphone type NAF 213264-6 and sprung mounting Bracket No. 213264-2.<br />\r\n<br />\r\nType RS 38-A.<br />\r\n<br />\r\nMade by Telephonics Corp.<br />\r\n<br />\r\nIn a un-issued condition. 3 0
    <blank> 1 0 0 0 0 22 0 RAF Seat Type Parachute Harness <blank> 1 4500 <blank> <blank> 1 0 50367 2016-11-04 12:02:26 155 8 1 <br />\r\nPost War RAF Pilot Seat Type Parachute Harness.<br />\r\n<br />\r\nThis Irvin manufactured harness is 'new old' stock and is unissued.<br />\r\n<br />\r\nThe label states Irvin Harness type C, Mk10, date 1976.<br />\r\nIt has Irvin marked buckles and complete harness straps all in 'mint' condition.<br />\r\n<br />\r\nFully working Irvin Quick Release Box and a canopy release Irvin 'D-Ring' Handle.<br />\r\n<br />\r\nThis harness is the same style type as the WW2 pattern seat type, and with some work could be made to look like one.<br />\r\n<br />\r\nIdeal for the re-enactor or collector (Not sold for parachuting).<br />\r\n<br />\r\nTotal weight of 4500 gms. 3 0

2.csv:

    id user_id organization_id hash name email date first_name hear_about
    1 2 15 <blank> Fairley [email protected] 1129889679 John 0

I only want to find the files that contain on the 1st line: "StockID" and "SellPrice"; so in this example, I want to output only ./1.csv. I managed to get this far, but I'm stuck now:

    where=$(find ./backup -type f)
    for x in $where; do
        head -1 $x | grep -w "StockID"
    done
find + awk solution:

    find ./backup -type f -exec \
      awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} \;

In case the order of the crucial words may differ, replace the pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/.

In case of a huge number of files, a more efficient alternative would be the following (processing a bunch of files at once; the total number of invocations of the command will be much smaller than the number of matched files):

    find ./backup -type f -exec \
      awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +
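Here is the second variant exercised against a throwaway ./backup tree (the file contents are shortened from the question's examples, and the order-independent pattern is used):

```shell
tmpdir=$(mktemp -d); cd "$tmpdir"
mkdir backup
printf 'StockID Dept Title SellPrice\n1 0 widget 65\n' > backup/1.csv
printf 'id user_id name\n1 2 Fairley\n'               > backup/2.csv

# Prints the name of every file whose first line mentions both words;
# nextfile skips the rest of each file after line 1.
find ./backup -type f -exec \
  awk 'FNR == 1 && /StockID/ && /SellPrice/{ print FILENAME }{ nextfile }' {} +
# prints: ./backup/1.csv

cd / && rm -r "$tmpdir"
```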
Search recursively for files that contain a specific combination of strings on the first line
When watching the last few lines of a text file for changes, I can use tail -f to keep updating my display. How can I achieve the same thing with head? Is there some solution which behaves like head -n 10 -f <filename>?
The watch command on Linux executes a program periodically. You can use it to re-run head and get a continuously refreshed view of the first 10 lines. Example:

    watch head -10 <filename>

I hope this helps.
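When watch is unavailable, a plain shell loop gives the same effect. This sketch polls only three times so it terminates on its own; a real monitor would use while true (and perhaps clear between iterations), or watch as above:

```shell
f=$(mktemp)
seq 20 > "$f"

# Re-read the first 10 lines a few times; watch(1) would add the interval
# handling and screen clearing for you.
for i in 1 2 3; do
    head -n 10 "$f"
    sleep 1
done

rm -f "$f"
```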
How can I display the first few lines of a file with updates?
I've been looking around and haven't found what I'm after. I have to say I'm pretty poor with grep, sed and awk, though. I have an alias:

    alias upgradable='apt list --upgradable'

and it gets me what I need:

    thunderbird/bionic-updates,bionic-security 1:68.4.1+build1-0ubuntu0.18.04.1 amd64 [upgradable from: 1:68.2.2+build1-0ubuntu0.18.04.1]
    thunderbird-gnome-support/bionic-updates,bionic-security 1:68.4.1+build1-0ubuntu0.18.04.1 amd64 [upgradable from: 1:68.2.2+build1-0ubuntu0.18.04.1]

However, I'd like to get only the first word, the header of it. I've tried lots of things, but all failed. How should I proceed?
To print everything before the first / you can use cut:

    alias upgradable='apt list --upgradable | cut -d/ -f1'

or awk:

    alias upgradable="apt list --upgradable | awk -F'/' '{print \$1}'"
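The cut part can be checked on canned apt-style output (the package lines are taken from the question):

```shell
# cut -d/ splits each line at "/" and -f1 keeps only the first field.
printf '%s\n' \
  'thunderbird/bionic-updates,bionic-security 1:68.4.1+build1-0ubuntu0.18.04.1 amd64' \
  'thunderbird-gnome-support/bionic-updates,bionic-security 1:68.4.1+build1-0ubuntu0.18.04.1 amd64' |
  cut -d/ -f1
# prints:
# thunderbird
# thunderbird-gnome-support
```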
Print first word of the output
I am using an FFI for Node.js, which for the most part has nothing to do with this question (which is really about understanding pipes better), but it does offer some context.

    function exec(cmd) {
        var buffer = new Buffer(32);
        var result = '';
        var fp = libc.popen('( ' + cmd + ') 2>&1', 'r');
        var code;

        if (!fp) throw new Error('execSync error: ' + cmd);

        while (!libc.feof(fp)) {
            libc.fgets(buffer, 32, fp);
            result += buffer.readCString();
        }
        code = libc.pclose(fp) >> 8;

        return { stdout: result, code: code };
    }

which brings me to this bit of code that, when I run it using this exec function:

    tr -dc "[:alpha:]" < /dev/urandom | head -c ${1-8}

I get the error:

    write error: Broken pipe
    tr: write error

but I do get the output I expect: 8 random characters. This confused the hell out of me, but then in some wild googling I found this Stack answer which perfectly fit my situation. I am left with some questions, though.

Why does

    tr -dc "[:alpha:]" < /dev/urandom | head -c ${1-8}

throw a broken pipe error when called with my exec command but not when called from the shell? I don't understand why, when I call

    tr -dc "[:alpha:]" < /dev/urandom

it reads endlessly, but when I pipe it to

    head -c ${1-8}

it works without throwing a broken pipe error. It seems that head would take what it needs and tr would just read forever. At the very least it should throw broken pipe: head would consume the first 8 bytes, and then tr would still be producing output, and broken pipe would be thrown by tr because head has stopped running. Both situations make sense to me, but they seem somewhat exclusive to each other.

I don't understand what is different between calling

    exec(tr -dc "[:alpha:]" < /dev/urandom | head -c ${1-8})

and running

    tr -dc "[:alpha:]" < /dev/urandom | head -c ${1-8}

directly from the command line, and specifically why redirecting (<) an endless file into something and then piping (|) it to something makes it not run endlessly. I've been doing this for years and never questioned why it works this way.

Lastly, is it OK to ignore this broken pipe error? Is there a way to fix it? Am I doing something wrong in my C++-ish JavaScript code? Am I missing some kind of popen basics?

EDIT: Messing around some more, the code

    exec('head -10 /dev/urandom | tr -dc "[:alpha:]" | head -c 8')

throws no pipe error!
Normally, tr shouldn't be able to write that error message, because it should have been killed by a SIGPIPE signal when trying to write something after the other end of the pipe has been closed upon termination of head.

You get that error message because, somehow, the process running tr has been configured to ignore SIGPIPEs. I suspect that might be done by the popen() implementation in your language there. You can reproduce it by doing:

    sh -c 'trap "" PIPE; tr -dc "[:alpha:]" < /dev/urandom | head -c 8'

You can confirm that's what is happening by doing:

    strace -fe signal sh your-program

(or the equivalent on your system if not using Linux). You'll then see something like:

    rt_sigaction(SIGPIPE, {SIG_IGN, ~[RTMIN RT_1], SA_RESTORER, 0x37cfc324f0}, NULL, 8) = 0

or

    signal(SIGPIPE, SIG_IGN)

done in one process before that same process or one of its descendants executes the /bin/sh that interprets that command line and starts tr and head.

If you do a strace -fe write, you'll see something like:

    write(1, "AJiYTlFFjjVIzkhCAhccuZddwcydwIIw"..., 4096) = -1 EPIPE (Broken pipe)

The write system call fails with an EPIPE error instead of triggering a SIGPIPE. In any case tr will exit: when ignoring SIGPIPE, because of that error (which also triggers an error message); when not, it exits upon receiving the SIGPIPE. You do want it to exit, since you don't want it carrying on reading /dev/urandom after those 8 bytes have been read by head.

To avoid that error message, you can restore the default handler for SIGPIPE with:

    trap - PIPE

prior to calling tr:

    popen("trap - PIPE; { tr ... | head -c 8; } 2>&1", ...)
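The difference is easy to observe from a shell. This sketch runs the pipeline twice, once with SIGPIPE at its default disposition and once ignored, capturing stderr each time:

```shell
# Default disposition: tr dies silently from SIGPIPE after head exits.
sh -c 'tr -dc "[:alpha:]" < /dev/urandom | head -c 8' 2>quiet.err
echo

# SIGPIPE ignored (as popen() apparently left it): tr's write fails with
# EPIPE instead, so tr prints a diagnostic before exiting.
sh -c 'trap "" PIPE; tr -dc "[:alpha:]" < /dev/urandom | head -c 8' 2>noisy.err
echo

cat noisy.err    # e.g. "tr: write error: Broken pipe"
rm -f quiet.err noisy.err
```

Both runs still produce the expected 8 alphabetic characters on stdout; only the stderr side differs.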
broken pipe error with popen and JS ffi
I'm trying to download a bunch of web pages, and once I've downloaded N lines of HTML, I want the whole thing to stop. But instead, the previous steps in the pipe just keep going. An example to see the problem:

    for i in /accessories /aches-pains /allergy-hayfever /baby-child /beauty-skincare; do echo $i; sleep 2; done | \
      while read -r line; do curl "https://www.medino.com$line"; done \
      | head -n 2

Now, I want this to make a single request, then abort. But what happens instead is this:

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html>
    <html lang="en" >
    100  4412    0  4412    0     0  12788      0 --:--:-- --:--:-- --:--:-- 12751
    curl: (23) Failed writing body (0 != 2358)
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2358    0  2358    0     0   3772      0 --:--:-- --:--:-- --:--:--  3766
    curl: (23) Failed writing body (0 != 2358)

    ( ^ repeats 4 times)

Why doesn't the script abort immediately, instead of keeping going? I'm not a super pro on pipes, so I feel like I'm missing something fundamental here.
The second part of your pipeline is while read -r line; do curl ...$line; done. When this runs:

- On the first iteration, the shell reads the first value into line and runs curl; curl (fetches and) outputs the webpage, of which head -n2 extracts the first two lines and exits, closing the pipe between the second and third parts. It appears in your example that curl writes this output as at least two blocks, so it gets an error on the second write and fails, i.e. exits with nonzero status.
- The shell does not terminate most command sequences (including a compound command) when one command fails, because shells are frequently used interactively, and it would be very inconvenient to have your shell die, forcing you to re-login and start over, every time you make any mistake running any program.
- Thus the shell reads the second value into line and runs the second curl, which immediately fails because the pipe is closed; but again the shell continues and reads the third line and runs the third curl, etc., until end of input causes read to fail. Since read is in the list-1 part of while, its failure causes the loop to terminate.

You can explicitly test whether curl failed (and then terminate) with:

    generate_values | while read -r line && curl ...$line; do :; done | head -n2

or you can set a shell option so it does terminate on failure:

    generate_values | { set -e; while read -r line; do curl ...$line; done; } | head -n2

Note that with both methods it may run one request over, because curl reports an error only on a write after the pipe is closed, i.e. after the last block. If your output limit (head -n$n) is exhausted during the last block of output from curl #2, that curl will exit 'success' and the shell will start curl #3, which will fail on its first (or only) write.
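The read-and-command-in-the-condition form can be demonstrated with printf standing in for curl (generate_values and the URLs above are placeholders; everything here is local):

```shell
# printf stands in for curl; head wants only two lines, and once it exits
# the next failing printf in the loop condition ends the while loop.
printf '/a\n/b\n/c\n/d\n' |
  while read -r line && printf 'fetched %s\n' "$line"; do :; done |
  head -n 2
# prints:
# fetched /a
# fetched /b
```

As the answer notes, the loop may still execute one iteration past the cut-off, because a small write can succeed into the pipe buffer before head exits; the guarantee is only that it stops soon after, not instantly.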
aborting previous steps in curl, xargs pipe when head finishes
So this shell script:

    #!/bin/bash
    head >/dev/null; head;

almost always gives the same output when called with sequential numbers (e.g. seq 10000 | ./sscript):

OUTPUT:

    (blank line)
    1861
    1862
    1863
    1864
    1865
    1866
    1867
    1868
    1869

I straced it with strace seq 10000 | ./sscript but wasn't able to explain to myself where exactly these numbers come from. At the end of strace:

    write(1, "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14"..., 4096) = 4096
    write(1, "1\n1042\n1043\n1044\n1045\n1046\n1047\n"..., 4096) = 4096
    write(1, "\n1861\n1862\n1863\n1864\n1865\n1866\n1"..., 4096) = 4096
    write(1, "2680\n2681\n2682\n2683\n2684\n2685\n26"..., 4096) = 4096
    write(1, "499\n3500\n3501\n3502\n3503\n3504\n350"..., 4096) = 4096
    write(1, "18\n4319\n4320\n4321\n4322\n4323\n4324"..., 4096) = 4096
    write(1, "7\n5138\n5139\n5140\n5141\n5142\n5143\n"..., 4096) = 4096
    write(1, "\n5957\n5958\n5959\n5960\n5961\n5962\n5"..., 4096) = 4096
    write(1, "6776\n6777\n6778\n6779\n6780\n6781\n67"..., 4096) = 4096
    write(1, "595\n7596\n7597\n7598\n7599\n7600\n760"..., 4096) = 4096
    write(1, "14\n8415\n8416\n8417\n8418\n8419\n8420"..., 4096) = 4096
    write(1, "3\n9234\n9235\n9236\n9237\n9238\n9239\n"..., 3838) = 3838

Why is only the 3rd write shown (sometimes only the 2nd)? Actually, only the first 10 lines of that returned write (3rd or 2nd) are printed because of the second head in the script, but the rest are still lost.
head prints 10 lines by default, but it reads in as much input as it can while doing so. Note that GNU head has options which require it to know how many lines there are in the file in total, so reading in as much as it can is not wrong. head reads in as much as it can to fill its buffer, which seems to be 8192 bytes:

    ~ seq 10000 | strace -fe read ./foo.sh
    read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260e\1\0\0\0\0\0"..., 832) = 832
    ...
    Process 17610 attached
    ...
    [pid 17610] read(0, "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14"..., 8192) = 8192
    ...
    [pid 17611] read(0, "\n1861\n1862\n1863\n1864\n1865\n1866\n1"..., 8192) = 8192
    ...

Since the first two writes are 4096 bytes each, they can both be consumed by the first head. This depends on timing: if seq only managed to get one write away by the time the first head printed 10 lines and quit, then the second write will be taken by the second head.

The comment from mikeserv is illuminating:

    you should try it w/ a regular file. seq 10000 >/tmp/nums; yourscript </tmp/nums

The reason this behaves as you would expect is that head tries to reposition the current reading point to the line after the ones it has output, using lseek(). This works for normal files, redirected files, etc., but doesn't work for pipes:

    The lseek() function shall fail if:
    ...
    ESPIPE  The fildes argument is associated with a pipe, FIFO, or socket.

As can be seen using strace:

    ~ seq 10000 | strace -fe lseek ./foo.sh
    ...
    Process 18561 attached
    [pid 18561] lseek(0, -8171, SEEK_CUR) = -1 ESPIPE (Illegal seek)
    [pid 18561] +++ exited with 0 +++
    --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=18561, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
    Process 18562 attached
    [pid 18562] lseek(0, -8146, SEEK_CUR) = -1 ESPIPE (Illegal seek)
    ...
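You can see the lseek() repositioning working when stdin is a seekable regular file (GNU head assumed; the script name foo.sh follows the answer's strace runs):

```shell
tmpdir=$(mktemp -d); cd "$tmpdir"
printf '#!/bin/sh\nhead >/dev/null; head\n' > foo.sh
chmod +x foo.sh

seq 30 > nums
# With a regular file, the first head seeks back to just after line 10
# before exiting, so the second head deterministically prints lines 11-20.
./foo.sh < nums

cd / && rm -r "$tmpdir"
```

Pipe the same data in instead (seq 30 | ./foo.sh) and the second head's starting point becomes timing-dependent, exactly as described above.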
Head Script output explanation
The less command accepts its defaults from an environment variable, LESS, so you can

    export LESS='-F -g -i -M -R -S -w -X -z-4'

at the beginning of your session. Is it possible to change the default line count returned by head and tail in a similar fashion? An alias is not an option, because it breaks explicit option setting (e.g. in a script): head -n 15 -5 fails with an error in both GNU and busybox head, at least.
Since the old-style options like -5, +5 are only recognised as the first argument, you could do:

    head() case $1 in
      ([-+][0-9]*) command head "$@";;
      (*)          command head -n 15 "$@";;
    esac

That will affect the heads invoked by your current shell. If you want to affect all head invocations, you'd need to write it as a script that appears first in your $PATH:

    mkdir -p ~/bin && cat > ~/bin/head << \EOF &&
    #! /bin/sh -
    case $1 in
      ([-+][0-9]*) ;;
      (*) set -- -n 15 "$@"
    esac
    exec /usr/bin/head "$@"
    EOF
    chmod +x ~/bin/head
    PATH=~/bin:$PATH
    export PATH
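The shell-function wrapper behaves like this (a quick check, with 15 assumed as the desired default):

```shell
# Wrapper: old-style -COUNT passes through untouched; everything else gets
# a default -n 15 prepended (a later explicit -n overrides it in GNU head).
head() case $1 in
  ([-+][0-9]*) command head "$@";;
  (*)          command head -n 15 "$@";;
esac

seq 100 | head      | wc -l    # default is now 15
seq 100 | head -5   | wc -l    # old-style count still works
seq 100 | head -n 3 | wc -l    # explicit -n still wins
```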
Is it possible to change the default COUNT value of tail and head?
Issue

I want to be able to:

1. concatenate all files in a directory (regular and hidden), and
2. display the title of each file at the beginning of each concatenation.

I found some solutions on the web where I can do #2 with

    tail -n +1 * 2>/dev/null

(a super neat trick), but it doesn't include hidden files, unlike if I were to do:

    cat * .* 2>/dev/null

or even

    head * .* 2>/dev/null

The cat command will do the trick but doesn't include the filename, and the head command will not print/concatenate the whole contents of each file.

Question

Is there a way to do what I need with tail? If not, what is a good substitute to achieve the same result/output?

Update with an example

The tail command when attempting to concatenate all files (regular and hidden):

    [kevin@PC-Fedora tmp]$ ls -la
    total 8
    drwx------   2 user user 4096 Jun 23 09:24 .
    drwxr-xr-x. 54 user user 4096 Jun 23 08:21 ..
    -rw-rw-r--   1 user user    0 Jun 23 09:24 .f1
    -rw-rw-r--   1 user user    0 Jun 23 09:24 f1
    -rw-rw-r--   1 user user    0 Jun 23 09:24 .f2
    -rw-rw-r--   1 user user    0 Jun 23 09:24 f2
    -rw-rw-r--   1 user user    0 Jun 23 09:24 .f3
    -rw-rw-r--   1 user user    0 Jun 23 09:24 f3
    -rw-rw-r--   1 user user    0 Jun 23 09:24 .f4
    -rw-rw-r--   1 user user    0 Jun 23 09:24 f4
    -rw-rw-r--   1 user user    0 Jun 23 09:24 f5
    [user@PC-Fedora tmp]$ tail -n +1 *
    ==> f1 <==

    ==> f2 <==

    ==> f3 <==

    ==> f4 <==

    ==> f5 <==
    [user@PC-Fedora tmp]$ tail -n +1 * .*
    ==> f1 <==

    ==> f2 <==

    ==> f3 <==

    ==> f4 <==

    ==> f5 <==

    ==> . <==
    tail: error reading '.': Is a directory
    [user@PC-Fedora tmp]$
With zsh and GNU tail (not all tail implementations can take more than one filename argument, and not all that do will display the file names):

    () { (($# == 0)) || tail -vn +1 -- "$@" < /dev/null; } *(ND)

-v is to still print the filenames even if there's only one file; D is for dotglob, N for nullglob, using an anonymous function which is passed the expansion of that glob and checks for the special case of the current directory being empty.

</dev/null is to partly mitigate the fact that GNU tail, when passed a filename called -, will read stdin instead. Here, we just prevent it from reading stdin, but it still won't read the file called -. Another approach would be to use "${@/#%-/./-}" in place of "$@" to replace - with ./-, but that would also mean that you'd see ==> ./- <== instead of ==> - <== for the - file (still probably better than ==> standard input <==).

You may also want to add the . (or -.) glob qualifier to restrict to regular files only, to avoid the errors (or worse) if there are directories or other types of non-regular files in the current directory (*(ND-.)).

Same with ksh93 and GNU tail:

    (
      FIGNORE='@(.|..)'
      set -- ~(N)*
      (($# == 0)) || tail -vn +1 -- "$@" < /dev/null
    )

Same with bash and GNU tail:

    (
      shopt -s nullglob dotglob
      set -- *
      (($# == 0)) || tail -vn +1 -- "$@" < /dev/null
    )

Or with GNU tail and any POSIX sh (including zsh, but only in sh emulation), also restricting to regular files or symlinks to regular files and replacing - with ./-, but potentially processing the files in a different order as we're doing the .* ones before the other ones:

    (
      set --
      for file in .* *; do
        [ -f "$file" ] || continue
        [ "$file" = - ] && file=./-
        set -- "$@" "$file"
      done
      [ "$#" -eq 0 ] || exec tail -n +1 -- "$@"
    )

Or you could use GNU awk (here with zsh):

    () {
      (( $# == 0 )) || gawk '
        BEGINFILE{
          print sep "==> " substr(FILENAME, 3) " <=="
          sep = "\n"
        }
        {print}' "$@"
    } ./*(ND-.)

awk has the same problem as tail with -, and also with filenames that contain =. We work around both by adding that ./ prefix, which we strip upon display. Note that awk adds a newline character if missing at the end of the files.

Or use a loop:

    sep=
    for f (*(ND-.)) {
      print -r "$sep==> $f <=="
      sep=$'\n'
      cat < $f
    }

(cat has the same problem with -, which we work around by passing the file on stdin instead of as an argument)
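The bash variant can be tried in a scratch directory (GNU tail assumed):

```shell
tmpdir=$(mktemp -d); cd "$tmpdir"
echo visible > f1
echo hidden  > .f2

# dotglob makes * match hidden names too; tail -v forces the "==> name <=="
# headers, and </dev/null stops a file named "-" from hijacking stdin.
bash -c '
  shopt -s nullglob dotglob
  set -- *
  (($# == 0)) || tail -vn +1 -- "$@" < /dev/null
'

cd / && rm -r "$tmpdir"
```

The output includes a ==> .f2 <== section for the hidden file, which plain tail -n +1 * would have skipped.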
How to properly use tail to concatenate all hidden files [duplicate]
I'm wondering if it's possible to "extract" the first 5 lines of a text file into a single variable (not an array), for example:

    head -5 test.txt > $variable

(which of course doesn't work). I'm trying to use zenity to display the first lines so I can confirm / cancel depending on the text displayed:

    zenity --question \
      --text=$text

(other working solutions are of course appreciated...)
It's as simple as:

    variable=`head -5 test.txt`
    # or
    variable=$(head -5 test.txt)

It looks like you are not well versed in shell scripting basics. Here are some nice guides:

    https://mywiki.wooledge.org/BashGuide/Parameters
    https://www.tldp.org/LDP/Bash-Beginners-Guide/html/
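A quick check of the capture (zenity itself needs a display, so only the variable part is exercised here; test.txt is created on the fly):

```shell
tmpdir=$(mktemp -d)
seq 20 > "$tmpdir/test.txt"

# Command substitution captures all 5 lines, newlines included.
text=$(head -5 "$tmpdir/test.txt")
printf '%s\n' "$text"                  # the 5 captured lines
# zenity --question --text="$text"     # quote it so the newlines survive

rm -r "$tmpdir"
```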
extract the first 5 lines of a text file to a variable
I need head -z for a script (off-topic, but the motivation can be found in this question), but in my CoreOS 835.13.0 I get head: invalid option -- 'z'. Full head --help output:

    Usage: head [OPTION]... [FILE]...
    Print the first 10 lines of each FILE to standard output.
    With more than one FILE, precede each with a header giving the file name.
    With no FILE, or when FILE is -, read standard input.

    Mandatory arguments to long options are mandatory for short options too.
      -c, --bytes=[-]K         print the first K bytes of each file;
                                 with the leading '-', print all but the last
                                 K bytes of each file
      -n, --lines=[-]K         print the first K lines instead of the first 10;
                                 with the leading '-', print all but the last
                                 K lines of each file
      -q, --quiet, --silent    never print headers giving file names
      -v, --verbose            always print headers giving file names
          --help     display this help and exit
          --version  output version information and exit

    K may have a multiplier suffix:
    b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024,
    GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y.

    GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
    Report head translation bugs to <http://translationproject.org/team/>
    For complete documentation, run: info coreutils 'head invocation'

The funny part is that the last line tells me to run info coreutils 'head invocation', but I get info: command not found.
Swap NULs and NLs before and after head:

    <file tr '\0\n' '\n\0' | head | tr '\n\0' '\0\n'

With recent versions of GNU sed:

    sed -z 10q

With GNU awk:

    gawk -v RS='\0' -v ORS='\0' '{print}; NR == 10 {exit}'
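The tr round-trip can be checked on a small NUL-delimited input (head -n 2 is used instead of the default 10 just to keep the example short):

```shell
# Three NUL-terminated records; keep the first two, then restore the NULs.
# The final tr exists only to make the NUL-delimited result printable.
printf 'alpha\0beta\0gamma\0' |
  tr '\0\n' '\n\0' | head -n 2 | tr '\n\0' '\0\n' |
  tr '\0' '\n'
# prints:
# alpha
# beta
```

Note the swap (rather than a one-way translation) means any real newlines inside the records are preserved: they ride through head as NULs and are turned back afterwards.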
How can I execute an equivalent of `head -z` when I don't have the `-z` option available?
I use the tail, head and grep commands to search log files. Most of the time the combination of these 3 commands, joined with pipes, gets the job done. However, I have this one log that many devices report to literally every few seconds, so the log is very large. But the pattern of the reporting is the same:

    Oct 10 11:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0xD 0xD 0xD
    Oct 10 11:58:50 Unit ID: 1111

The above shows that a UDP packet was sent to the socket server for a specific unit ID. Sometimes I want to view the packet information for this unit within a specific time range by querying the log:

    Oct 10 11:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0xD 0xD 0xD
    Oct 10 11:58:50 Unit ID: 1111
    ... // A bunch of other units reporting, including unit id 1111
    Oct 10 23:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x28 0x28 0x28
    Oct 10 23:58:50 Unit ID: 1111

So in the example above, I would like to display log output only for Unit ID: 1111 within the time range of 11:58 and 23:58. The possible results would look like this:

    Oct 10 11:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0xD 0xD 0xD
    Oct 10 11:58:50 Unit ID: 1111
    Oct 10 12:55:11 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x28 0xD 0x28
    Oct 10 12:55:11 Unit ID: 1111
    Oct 10 15:33:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x33 0xD 0x11
    Oct 10 15:33:50 Unit ID: 1111
    Oct 10 23:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x28 0x28 0x28
    Oct 10 23:58:50 Unit ID: 1111

Notice the results only display Unit ID: 1111 information and not the other units. Now, the problem with using something like

    tail -n 10000 | grep -B20 -A20 "Oct 10 23:58:50 Unit ID: 1111"

is that it displays a lot of stuff, not just what I need.
    awk '$3 >= "11:58" && $3 <= "23:59" && /Unit ID: 1111/{print l"\n"$0};{l=$0}' logfile

This compares the third field (the HH:MM:SS timestamp) as a string, remembers every line in l, and, whenever a line in the time range matches Unit ID: 1111, prints the saved previous line (the Received Packet line) followed by the match itself. Note the upper bound "23:59": with a bound of "23:58", a timestamp such as 23:58:50 would compare greater than "23:58" as a string and be excluded from the range.
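Applied to a cut-down version of the log from the question:

```shell
tmpdir=$(mktemp -d)
cat > "$tmpdir/units.log" <<'EOF'
Oct 10 11:58:50 Received Packet from [10.0.0.1:4000]: 0xD 0xD 0xD
Oct 10 11:58:50 Unit ID: 1111
Oct 10 12:10:00 Received Packet from [10.0.0.2:4000]: 0x28
Oct 10 12:10:00 Unit ID: 2222
Oct 10 23:58:50 Received Packet from [10.0.0.1:4000]: 0x28 0x28 0x28
Oct 10 23:58:50 Unit ID: 1111
EOF

# Prints only the two packet/ID pairs for unit 1111; unit 2222 is filtered out.
awk '$3 >= "11:58" && $3 <= "23:59" && /Unit ID: 1111/{print l"\n"$0};{l=$0}' \
  "$tmpdir/units.log"

rm -r "$tmpdir"
```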
Search and Filter Text on large log files
I have two environments: Mac and Linux. I wonder about the head command: it can show just 2 lines of text if I invoke it like so:

    vim --version | head -2

I saw this online and ran to the man page to learn more, only to discover that neither the OS X nor the Linux pages have any information that I could decipher describing the possibility of using -2 directly to get just 2 lines of text from stdin. So my questions are:

- am I reading the man pages wrong?
- if so, what indicates the possibility of specifying -2 directly?
- if not, is it common for recognised options not to be explicitly outlined in the man page?
- if not in man, where should I look for full disclosure of a command's options?
- are there many other convenient features like this, which objectively make head much nicer and more direct to use, that I am oblivious to and can't learn about by studying -h output and/or a man page?

Mac: OS X v10.8.3 (build 12D78)
Linux: GNU/Linux (kernel 3.5.0-25-generic), Ubuntu 12.10, GNU coreutils 8.12.197-032bb
Essentially, you've found the backwards-compatibility flags (which, to be honest, I had never known existed). From the man page:

    SEE ALSO
        The full documentation for head is maintained as a Texinfo manual. If the
        info and head programs are properly installed at your site, the command

            info coreutils 'head invocation'

At the bottom of the info coreutils page:

    For compatibility `head' also supports an obsolete option syntax
    `-COUNTOPTIONS', which is recognized only if it is specified first. COUNT
    is a decimal number optionally followed by a size letter (`b', `k', `m')
    as in `-c', or `l' to mean count by lines, or other option letters
    (`cqv'). Scripts intended for standard hosts should use `-c COUNT' or
    `-n COUNT' instead. If your script must also run on hosts that support
    only the obsolete syntax, it is usually simpler to avoid `head', e.g., by
    using `sed 5q' instead of `head -5'.

    An exit status of zero indicates success, and a nonzero value indicates
    failure.
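The obsolete and modern spellings (and the portable sed idiom the manual suggests) can be compared directly:

```shell
seq 10 | head -2       # obsolete: -COUNT, recognized only as the first argument
seq 10 | head -n 2     # modern, standard: -n COUNT
seq 10 | sed 2q        # portable equivalent the manual recommends
# each prints:
# 1
# 2
```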
head command options and reading man files
OK, so, I have this non-functional shell script, which I am rewriting piece by piece in Python, except I am getting an 'unexpected "|" error' from the shell (see below):

    #/bin/sh
    LINES=`cat $@ | wc -l`
    for i in `seq 1 $lines`; do
        head -n $i $@ | tail -n 1 | text2wave -o temp.wav
        sox "otherstuff.wav" "temp.wav" "silence.wav" "output.wav"
        mv output.wav otherstuff.wav
        rm -rf temp.wav
    done

Which isn't really feasible in practice. But if I know the number of lines in a file, I can run it on a particular file to TTS the entire file and insert 10 seconds of silence between each line, because I don't have to say LINES=`cat $@ | wc -l`. In the interest of flow control, and of incorporating the line count into a script I can use everywhere, I am going to use Python for this job. So far I have this fragment, also non-functional:

    import linecache, os

    for i in range(linelength):
        lineone = linecache.getline(filename, i)
        os.system("echo " + lineone + "|" + "festival --tts")

which gives this error in the IPython interpreter, repeating for each input line:

    d 68.
    sh: 2: Syntax error: "|" unexpected
    Out[60]: 512
    d 67.
    sh: 2: Syntax error: "|" unexpected
    Out[60]: 512
    c 52.
    sh: 2: Syntax error: "|" unexpected
    Out[60]: 512
    ...

This replicates

    for i in `seq 1 $lines`; do
        head -n $i $@ | tail -n 1 | text2wave -o temp.wav

but is handier for testing everything, because it simply reads the line out (festival and text2wave are part of the same package; one reads things out and the other writes files). Now, with the number of lines already retrieved and stored in linelength (I didn't have a problem getting Python to do that): if it were simply

    for i in range(linelength):
        lineone = linecache.getline(filename, i)
        os.system("echo somestuffnotaline | festival --tts")

then festival would say "somEhstuffnotaLINE", but I would not be as happy as I would be if it said "c 62", "d 74", etc., those being the contents of each line in the files I am processing.
Your question is long and rambling and I don't know what you expect for an answer. Going by your title, I think your focus is on this fragment of Python code:

lineone = linecache.getline(filename, i)
os.system("echo " + lineone + "|" + "festival --tts")

Your problem is that lineone is the whole line, including the final newline. You need to, as they say in Perl land, chomp it.

lineone = linecache.getline(filename, i).rstrip('\n')
os.system("echo " + lineone + "|" + "festival --tts")

Your first shell script looks awfully complicated and slow for what it does. Why are you computing the number of lines, then retrieving the lines by number in order? You could simply read the input one line at a time, like you do in Python.

while IFS= read -r line; do
  echo "$line" | text2wave -o temp.wav
  sox "otherstuff.wav" "temp.wav" "silence.wav" "output.wav"
  mv output.wav otherstuff.wav
  rm temp.wav
done

You should be able to simplify this further by using raw audio files, that do not contain a header and so can be concatenated:

while IFS= read -r line; do
  echo "$line" | text2wave -otype raw >>otherstuff.raw
  cat silence.raw >>otherstuff.raw
done
sox … otherstuff.raw otherstuff.wav

You'll need to tell sox what parameters (such as sampling depth) the raw audio file is encoded in.
How to execute this particular shell command from Python?
1,457,006,052,000
For a file containing 20 lines, lines 6-10 can be printed using the following command:

head -10 filename | tail -5

Can this exact same thing be done without using the 'head' and 'tail' commands? Please comment with the link if a similar question already exists.
sed would work well here:

seq 20 | sed '6,10!d'
6
7
8
9
10

You could use this as well:

sed -n '6,10p'

Or awk:

awk '6 <= NR && NR <= 10'
Command to print few consecutive lines from middle of a file [duplicate]
1,457,006,052,000
I have 11 files with spaces in their names in a folder and I want to copy the newest 10. I used

ls -t | head -n 10

to get only the newest 10 files. When I want to use the expression in a cp statement I get an error that the files could not be found, because of the space in the name. E.g., for a file named "10 abc.pdf":

cp: cannot stat ‘10’: No such file or directory

The cp statement:

cp $(ls -t | head -n 10) ~/...

How do I get this working?
If you're using Bash and you want a 100% safe method (which, I guess, you want, now that you've learned the hard way that you must handle filenames seriously), here you go:

shopt -s nullglob
while IFS= read -r file; do
    file=${file#* }
    eval "file=$file"
    cp "$file" Destination
done < <(
    for f in *; do
        printf '%s %q\n' "$(date -r "$f" +'%s')" "$f"
    done | sort -rn | head -n10
)

Let's have a look at its several parts:

for f in *; do
    printf '%s %q\n' "$(date -r "$f" +'%s')" "$f"
done

This will print to stdout terms of the form

Timestamp Filename

where Timestamp is the modification date (obtained with date's -r option) in seconds since the Epoch, and Filename is a quoted version of the filename that can be reused as shell input (see help printf and the %q format specifier). These lines are then sorted (numerically, in reverse order) with sort, and only the first 10 are kept. This is then fed to the while loop. The timestamps are removed with the file=${file#* } assignment (this gets rid of everything up to and including the first space), then the apparently dangerous line eval "file=$file" gets rid of the escape characters introduced by printf %q, and finally we can safely copy the file.

Probably not the best approach or implementation, but 100% guaranteed safe regarding any possible filenames, and it gets the job done. Though, this will treat regular files, directories, etc. all the same. If you want to restrict it to regular files, add

[[ -f $f ]] || continue

just after the for f in *; do line. Also, it will not consider hidden files. If you want hidden files (but not . nor .., of course), add shopt -s dotglob.

Another 100% Bash solution is to use Bash directly to sort the files.
Here's an approach using a quicksort:

quicksort() {
    # quicksorts the filenames passed as positional parameters
    # wrt modification time, newest first
    # return array is quicksort_ret
    if (($#==0)); then
        quicksort_ret=()
        return
    fi
    local pivot=$1 oldest=() newest=() f
    shift
    for f; do
        if [[ $f -nt $pivot ]]; then
            newest+=( "$f" )
        else
            oldest+=( "$f" )
        fi
    done
    quicksort "${oldest[@]}"
    oldest=( "${quicksort_ret[@]}" )
    quicksort "${newest[@]}"
    quicksort_ret+=( "$pivot" "${oldest[@]}" )
}

Then, sort them out, keep the first 10, and copy them to your destination:

$ shopt -s nullglob
$ quicksort *
$ cp -- "${quicksort_ret[@]:0:10}" Destination

Same as the previous method, this will treat regular files, directories, etc. all the same and skip hidden files.

For another approach: if your ls has the i and q options, you can use something along these lines:

ls -tiq | head -n10 | cut -d ' ' -f1 | xargs -I X find -inum X -exec cp {} Destination \; -quit

This will show the file's inode, and find can perform commands on files referred to by their inode. Same thing, this will also treat directories, etc., not just regular files. I don't really like this one, as it relies too much on ls's output format…
Use output from head to copy files with spaces
1,457,006,052,000
I have many CSV files in one directory which have various lengths. I'd like to put the second to last line of each file into one file. I tried something like tail -2 * | head -1 > file.txt, then realized why that doesn't work. I'm using BusyBox v1.19.4. Edit: I do see the similarity with some other questions, but this is different because it's about reading multiple files. The for loop in Tom Hunt's answer is what I needed and hadn't thought of before.
for i in *; do tail -2 "$i" | head -1; done >>file.txt

That should be sh (and hence Busybox) compatible, but I don't have a non-bash available for testing ATM. Edited in accord with helpful comments.
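As a quick sanity check, the loop can be exercised on two throwaway files (filenames invented for the demo):

```shell
# Create two sample files of different lengths.
printf 'a1\na2\na3\n' > sample1.csv
printf 'b1\nb2\nb3\nb4\n' > sample2.csv

# Second-to-last line of each file, appended to one output file.
for i in sample1.csv sample2.csv; do
  tail -2 "$i" | head -1
done >> combined.txt

cat combined.txt
```

The output file then holds one line per input file (a2 and b3 here).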
How can I print the second to last line of many files into one file? [duplicate]
1,457,006,052,000
Ultimately I need to grab lines 3-53 from each CSV in each subdirectory. I grabbed the lines from one file like so ('cat' isn't necessarily required): cat /[path]/[file].csv | head -53 | tail -51 and the files I need like this ('find' is required): find /[path]/ -name "*.csv" The problem is I'm having trouble linking the two together. Can someone nudge me in the right direction?
Try this:

find /path/to/file/ -maxdepth 1 -type f -name '*.csv' -print0 |
while IFS= read -d '' -r file; do
    sed -n '3,53p' "$file"
done

Notice the -print0 option, which, together with read -d '', takes care of any possible whitespace characters in the file names ("$file" must be quoted for the same reason).
Pulling lines n to m from list of files found
1,457,006,052,000
I want to read a big log file 10000 lines at a time, and I use this command:

tail -c +offset somefile | head -n 10000

My question is: with the piping to head, will tail read the whole file?

Updated: I know that tail will read from the offset position. What I want to know is: if the file has 20000 lines left after the offset, and head will just print the first 10000, will tail read all of the remaining 20000 lines, or just the first 10000?
tail will die of a SIGPIPE signal as soon as it tries to write to the pipe when it has no reader. So tail will die soon after head has finished outputting its 10000 lines and exited.

Because pipes can hold some data (64kiB on Linux), and because tail buffers its output when not writing to a terminal (8kiB in my test) and head and tail read their input in chunks (of up to 8kiB in my tests), tail may read up to 64 + 8 + 8 kiB past the end of the 10000th line after offset before dying.

The worst case scenario is when head is slower to empty the pipe than tail is to fill it up (for instance if writing to a slow output like a terminal), and when the last byte of the 10000th line is at the start of an 8kiB block. Then, in the end, head will have read the block with that last newline at the beginning (8191 more bytes than necessary), but is busy writing its output. tail, during that time, has filled up the pipe (64kiB) and has read another 8kiB block, but as the pipe is full it is blocked writing to it. As soon as head writes its last line and exits, tail will die, but it will have read 64kiB + 8kiB + 8191 bytes past the last of those 10000 lines.

The best case scenario is when the last of the 10000 lines is at the very end of an 8kiB block and head reads data as soon as it's put in the pipe by tail. Then, if on the last block head manages to read it from the pipe and exit before tail writes the next one, tail will die upon writing the next block, so it will read only 8192 bytes past the end of that 10000th line.

That assumes somefile is a regular file. It could terminate sooner if somefile was some kind of special file that trickles one byte at a time, for instance (like a pipe from a while :; do echo; done).
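This early termination is the same mechanism that stops yes, which would otherwise write forever: once head exits, the producer's next write hits a broken pipe and it is killed by SIGPIPE.

```shell
# yes would loop forever, but head exits after 3 lines;
# yes then dies of SIGPIPE on its next write, so the pipeline terminates.
yes | head -n 3
```

This prints three lines of "y" and returns immediately.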
will tail with piping read whole file?
1,457,006,052,000
I'm making a script to perform "dig ns google.com" and cut off all of the result except for the answers section. So far I have:

#!/bin/bash
echo -n "Please enter the domain: "
read d
echo "You entered: $d"
dr="$(dig ns $d)"
sr="$(sed -i 1,10d $dr)"
tr="$(head -n -6 $sr)"
echo "$tr"

Theoretically, this should work. The sed and head commands work individually outside of the script to cut off the first 10 and last 6 lines respectively, but when I put them inside my script sed comes back with an error, and it looks like it's trying to read the variable as part of the command rather than as the input. The error is:

sed: invalid option -- '>'

So far I haven't been able to find a way for it to read the variable as input. I've tried surrounding it in "" and '' but that doesn't work. I'm new to this whole bash scripting thing obviously; any help would be great!
sed takes its input from stdin, not from the command line, so your script won't work either theoretically or practically. sed -i 1,10d $dr does not do what you think it does...sed will treat the value of "$dr" as a list of filenames to process. Try echo "$dr" | sed -e '1,10d' or sed -e '1,10d' <<<"$dr". BTW, you must use double-quotes around "$dr" here, otherwise sed will not get multiple lines of input separated by \n, it will only get one input line. Or better yet, to get only the NS records, replace all of your sed commands with just this one command: tr=$(echo "$dr" | sed -n -e '/IN[[:space:]]\+NS[[:space:]]/p') Alternatively, eliminate all the intermediate steps and just run this: tr=$(dig ns "$d" | sed -n -e '/IN[[:space:]]\+NS[[:space:]]/p') Or you can get just the nameservers' hostnames with: tr=$(dig ns "$d" | awk '/IN[[:space:]]+NS[[:space:]]/ {print $5}') BTW, the output of host -t ns domain.example.com may be easier to parse than the output of dig.
How can I get my bash script to remove the first n and last n lines from a variable?
1,457,006,052,000
Assume I have a file with about 10000 lines. How can I print 100 lines, starting from line 1200 to line 1300?
With awk this would be:

awk 'NR >= 1200 && NR <= 1300'

with sed:

sed -n '1200,1300 p' FILE

with head and tail:

head -n 1300 FILE | tail -n 101

(note that lines 1200 through 1300 inclusive are 101 lines, so tail needs 101 here to match the awk and sed versions)

so many options, so many answers on stackexchange :)
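The variants can be checked against each other on generated input. Note that 1200 through 1300 inclusive is 101 lines, so the head|tail form needs tail -n 101 to match the awk and sed ranges:

```shell
seq 10000 > big.txt

awk 'NR >= 1200 && NR <= 1300' big.txt > a.txt
sed -n '1200,1300p' big.txt            > b.txt
head -n 1300 big.txt | tail -n 101     > c.txt

# All three slices are identical: lines 1200..1300.
cmp a.txt b.txt && cmp a.txt c.txt && echo identical
```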
How to echo from specific line of a file to another specific line [duplicate]
1,457,006,052,000
I have a txt file in which I have to swap the first paragraph with the last one. I did that, but now I don't know how to paste everything into a new txt file. This is my command:

tail -14 gl.txt ; head -n 74 gl.txt | tail -n 68 ; head -5 gl.txt

I tried to use > like this:

tail -14 gl.txt ; head -n 74 gl.txt | tail -n 68 ; head -5 gl.txt > gl_ok.txt

but it only takes the last paragraph. How can I do it?
Try grouping the commands within { ...; } and redirecting the output at the end to a file:

{ tail -14 gl.txt ; head -n 74 gl.txt | tail -n 68 ; head -5 gl.txt; } > gl_ok.txt

Note that the last semi-colon before the closing bracket is mandatory. Alternatively, the grouped commands can be terminated with newlines, like below:

{
  tail -14 gl.txt
  head -n 74 gl.txt | tail -n 68
  head -5 gl.txt
} > gl_ok.txt

If your shell is bash, see man bash under "Compound Commands":

{ list; }
    list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command.

See also grouping commands using a sub-shell ( ... ); then you would do (...) >output
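The effect of the braces is easy to demonstrate on a small file (filenames invented for the demo):

```shell
printf 'one\ntwo\nthree\n' > group_demo.txt

# Without grouping, only the last command's output is redirected;
# head's output still goes to the terminal.
head -1 group_demo.txt ; tail -1 group_demo.txt > only_last.txt

# With grouping, the combined output of both commands lands in the file.
{ head -1 group_demo.txt ; tail -1 group_demo.txt; } > both.txt

cat both.txt
```

Here only_last.txt ends up containing just "three", while both.txt contains "one" and "three".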
How to paste multiple commands output into single output file
1,457,006,052,000
When I run the command

head -n 445 /etc/snort/snort.conf | nl

I expect lines 1-445 to be returned. However, only up to line 371 is returned:

[snip]
   370  preprocessor dcerpc2_server: default, policy WinXP, \
   371      detect [smb [139, 445], tcp 35, udp 135, rpc-over-http-server 593], \

What is happening?
The nl utility does not number blank lines by default (and you have blank lines in the input file).
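Both behaviours are easy to see with a three-line input containing one blank line; nl's -b a option (number all lines) makes the numbering match head's line count:

```shell
# Default: the blank line is passed through unnumbered.
printf 'alpha\n\nbeta\n' | nl

# -b a (often written -ba): number every line, including blank ones.
printf 'alpha\n\nbeta\n' | nl -ba
```

With the default, beta is numbered 2; with -ba it is numbered 3.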
head not returning n lines
1,457,006,052,000
Is there a unix command I can use to stream the items/contents of a directory? Using Node.js, we can read everything into memory with: fs.readdir(dir, (err, items) => {}); but I am looking to stream items, for a very large directory, say with more than 10 million folders/files in it. The tail command is for reading files not folders TMK, so is there some unix utility that can stream the contents of folders?
In Unix you can use the find command to stream files, directories, or both. The most basic command is this:

$ find .

This will stream a list of files and directories, which can then be passed through to another command via a pipe, |, or you can use find's built-in ability to run another command via -exec.

$ find . -type f -exec grep <somestring> {} \;

Or

$ find . -type f | ....

If you just want the contents of a single directory, you can restrict find's recursion. With GNU find that is the -maxdepth switch (BSD find spells the same thing -depth 1):

$ find . -type f -maxdepth 1 | ....
Stream contents of directory instead of reading all items [closed]
1,457,006,052,000
When i ran head file.txt && nl file.txt it did each command in order of occurance (which makes sense). Is it possible to have the head display with numbered lines, so that this: word word word would become this: 1 word 2 word 3 word
head file.txt | nl The | creates a pipeline that takes the output of head file.txt and gives it to nl as its "standard" input. Bare nl without a file name will read its standard input and number it, so you get the output of head numbered as you wanted. Without a pipe providing input, just nl would read input from the terminal that you typed. The pipe is a way of providing that data as though you'd typed it in like that. You can pipe from any command that prints its output, and pipe to any command that reads from the terminal like that, and even pipe several things together: head -n 50 file.txt | nl | tail -n 20 will give you numbered lines 31-50 from the file.
Is it possible to run head & use nl to number the lines?
1,457,006,052,000
This is how I extracted the first 100000 lines from my big xml file (2GB):

head -n 100000 source.xml > part.xml

How can I keep splitting it into 100000-line (or specific file size) chunks until the whole file is separated?
You could use split:

split -l lines_per_file --additional-suffix=.xml source.xml part

This will read the file source.xml and split it into chunks of lines_per_file lines each. The result will be written into a series of files partaa.xml, partab.xml, partac.xml, ...

If you want to use another number of suffix characters, you can use the -a option to specify a number, e.g. -a 1 to name the files parta.xml, partb.xml, partc.xml, ...

If you want to split into file-size chunks instead of line counts, you can use -b size_in_bytes instead of -l lines_per_file.

Please note that the resulting files will most likely be invalid XML files (unless you happen to get one file in return, i.e. your input had too few lines/bytes to get split).
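Scaled down, the same invocation splits a generated 250-line file into three pieces (--additional-suffix is specific to GNU split; omit it on other implementations):

```shell
seq 250 > source.txt

# 100-line pieces: partaa.txt (100), partab.txt (100), partac.txt (50).
split -l 100 --additional-suffix=.txt source.txt part

wc -l part*.txt
```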
Split very big xml file into little pieces with specific line number count
1,457,006,052,000
I have a set of .txt file-pairs. In each pair of files, File1 contains a single integer and File2 contains many lines of text. In the script I'm writing, I'd like to use the integer in File1 to specify how many lines to take off the top of File2 and then write those lines to another file. I'm using gnu-parallel to run this on many file-pairs in parallel. It seems like a simple way to do this would be to pass the contents of File1 as the parameter for the -n option of head -- is this possible? I've tried using xargs and cat File1, but neither is working. An example file-pair: File1: 2 File2: AAA BBB CCC DDD Desired output: File3: AAA BBB If I were not using gnu-parallel, I could assign the contents of File1 to a variable (though I don't know if I could pass that into head's -n option?); however, parallel's {} seem to complicate this approach. I can provide more information if needed.
Extending Gilles' answer:

parallel 'head -n "$(cat {1})" {2}' ::: File1s* :::+ Corresponding_File2s*

You probably have a lot of File1s that you want linked to File2s. The :::+ does that.
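For a single pair, the same command substitution works without parallel; head simply takes its -n value from the other file:

```shell
# Build the example pair from the question.
printf '2\n' > File1
printf 'AAA\nBBB\nCCC\nDDD\n' > File2

# Use File1's integer as head's line count.
head -n "$(cat File1)" File2 > File3

cat File3
```

File3 then contains the first two lines of File2 (AAA and BBB).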
How to pass the contents of a file to an option/parameter of a function
1,457,006,052,000
I am using the below command on a file to extract a few lines based on chr# (different chromosome numbers). This is just a single file I'm working on. I have 8 such files, and for each file I have to do this for chr1 to chr22 and then chrX and chrY. I'm not using any loop; I did it individually. As you can see, I want the header to be kept intact in each of my outputs. If I execute the commands individually I get the header in the output, but if I run the script for all 8 files together (which is like 8*24 commands in a script, one after another), the output does not have any header. Can you tell me why this is happening?

#!/bin/sh
#
#$ -N DOC_gatk_chr
#$ -cwd
#$ -e err_DOC_gatk_chr.txt
#$ -o out_DOC_gatk_chr.txt
#$ -S /bin/sh
#$ -M [email protected]
#$ -m bea
#$ -l h_vmem=25G

more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr1" > S_313_IPS_S7995.chr1.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr2" > S_313_IPS_S7995.chr2.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr3" > S_313_IPS_S7995.chr3.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr4" > S_313_IPS_S7995.chr4.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr5" > S_313_IPS_S7995.chr5.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr6" > S_313_IPS_S7995.chr6.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr7" > S_313_IPS_S7995.chr7.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr8" > S_313_IPS_S7995.chr8.coverage
more S_313_IPS_S7995.coverage.sample_interval_summary | head -n1; more S_313_IPS_S7995.coverage.sample_interval_summary | grep "chr9" > S_313_IPS_S7995.chr9.coverage

I am running it as a job with qsub, so the structure of the script looks like the above. It works if I execute the commands individually, but if I run them like this, the header does not get printed in the output file; it seems the command before the ';' is not recognized. I tried running it with both qsub filename.sh and sh filename.sh. I found that with sh filename.sh the header gets printed on the console. So the output of the command before the ';' semi-colon is definitely not being written to the file. How can I get rid of this problem?

Desired output:

Target total_coverage average_coverage IPS_S7995_total_cvg IPS_S7995_mean_cvg IPS_S7995_granular_Q1 IPS_S7995_granular_median IPS_S7995_granular_Q3 IPS_S7995_%_above_15
chr2:41460-41683 14271 63.71 14271 63.71 56 67 79 100.0
chr2:45338-46352 123888 122.06 123888 122.06 79 123 147 94.6
chr2:218731-218983 11653 46.06 11653 46.06 36 50 55 100.0
chr2:224825-225012 12319 65.53 12319 65.53 57 68 76 100.0
chr2:229912-230090 20983 117.22 20983 117.22 93 120 147 100.0
chr2:230947-231137 22386 117.20 22386 117.20 100 120 139 100.0
chr2:233074-233258 11710 63.30 11710 63.30 54 66 73 100.0
chr2:234086-234300 22952 106.75 22952 106.75 91 113 126 100.0
chr2:242747-242922 20496 116.45 20496 116.45 93 124 142 100.0
chr2:243469-243671 27074 133.37 27074 133.37 126 138 148 100.0

But the output I am getting is below, without the header:

chr2:41460-41683 14271 63.71 14271 63.71 56 67 79 100.0
chr2:45338-46352 123888 122.06 123888 122.06 79 123 147 94.6
chr2:218731-218983 11653 46.06 11653 46.06 36 50 55 100.0
chr2:224825-225012 12319 65.53 12319 65.53 57 68 76 100.0
chr2:229912-230090 20983 117.22 20983 117.22 93 120 147 100.0
chr2:230947-231137 22386 117.20 22386 117.20 100 120 139 100.0
chr2:233074-233258 11710 63.30 11710 63.30 54 66 73 100.0
chr2:234086-234300 22952 106.75 22952 106.75 91 113 126 100.0
chr2:242747-242922 20496 116.45 20496 116.45 93 124 142 100.0
chr2:243469-243671 27074 133.37 27074 133.37 126 138 148 100.0
You need something like this:

{ head -n1 S_313_IPS_S7995.coverage.sample_interval_summary
  grep "chr1" S_313_IPS_S7995.coverage.sample_interval_summary
} > S_313_IPS_S7995.chr1.coverage

or

awk 'NR==1 || /chr1/' S_313_IPS_S7995.coverage.sample_interval_summary > S_313_IPS_S7995.chr1.coverage

The problem is that the redirection affects only one command. In order to get the output of head and grep in the redirection, they have to be grouped. But awk is probably the better choice here.
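A scaled-down run of the awk variant, on a made-up two-column summary file, shows the header surviving into the per-chromosome output:

```shell
printf 'Target\tcvg\nchr1:1-10\t5\nchr2:1-10\t7\nchr10:1-10\t9\n' > summary.tsv

# Keep line 1 (the header) plus every line matching chr2.
awk 'NR==1 || /chr2/' summary.tsv > chr2.coverage

cat chr2.coverage
```

Note that a bare /chr1/ would also match chr10 through chr19; anchoring the pattern (e.g. /^chr1:/) avoids that.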
Problem with grep on multiple files and not getting desired output
1,457,006,052,000
It seems that I've removed head manually from my /usr/bin/ a couple of months ago. Now that I chance to need it I don't have it. How do I reinstall it without reinstalling the whole distro? My environment: Ubuntu 20.04 LTS Desktop.
You can re-install head by re-installing the package which contains it; from a terminal window, run sudo apt reinstall coreutils (In older versions of apt, pre-1.8.0~rc1, run sudo apt install --reinstall coreutils instead.) You can determine which package is involved by running dpkg -S bin/head
I have removed 'head' manually - how do I reinstall it? [duplicate]
1,457,006,052,000
I have a file with 1 million lines. I want to extract lines from line 10001 to 500000 How to do this?
sed is your friend:

sed -n '10001,500000p;500001q'

Note that 500001q is needed to stop further file processing; otherwise it will still read the file to the very end. Thanks for the hint on this to @Freddy.
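The same pattern, with the quit address one past the end of the range, can be checked end to end on a small scale:

```shell
seq 100 > lines.txt

# Print lines 11-50, then quit at line 51 instead of reading to EOF.
sed -n '11,50p;51q' lines.txt > slice.txt

wc -l slice.txt
```

slice.txt then holds the 40 lines from 11 through 50.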
How to extract lines knowing start and end lines
1,457,006,052,000
When I run:

cat filename | cut -f3 | head -1

I get the following result:

apple

However, when I save this to a file by using:

cat filename | cut -f3 | head -1 > newfile

and then open it using PHP with the following:

$variable = file_get_contents("newfile");
echo $variable; // PRINTS "apple"

then the length comes out as 6:

echo strlen($variable); // PRINTS 6 WHEN I EXPECT 5!

$variable = "apple";
echo strlen($variable); // NOW PRINTS 5

Any idea how to avoid this? I need to use this variable in a lookup function and it won't match my lookup due to the extra character, which I cannot identify. When I echo the following:

$variable = file_get_contents("newfile");
echo "TEST1";
echo "TEST2";
echo $variable;
echo "TEST3";
echo "TEST4";

I get the following output:

TEST1TEST2apple
TEST3TEST4

So it must be printing a newline somehow...?!
Indeed, that's your \n that is counted by strlen. In PHP, you have rtrim (http://php.net/manual/fr/function.rtrim.php) to remove all \n, \t, \r, \0 and \x0B characters from the right end of your string.
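The stray character is the trailing newline that the pipeline preserves; it is visible from the shell before PHP is even involved:

```shell
# A line extracted with cut/head keeps its trailing newline: 5 letters + '\n'.
printf 'apple\n' | wc -c

# The bare word, with no newline, is 5 bytes.
printf 'apple' | wc -c
```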
cat filename | cut -f2 | head -1 > newfile contains more characters than expected
1,457,006,052,000
I would like to print the tail of a file (could be also head or cat in general) to the screen but restrict the number of characters per line. So if a file contains ... abcdefg abcd abcde abcdefgh ... and the maximum number is 5, then the following should be printed: abcde abcd abcde abcde How would I do that?
tail yourfile | cut -c 1-5
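Applied to the sample lines from the question:

```shell
printf 'abcdefg\nabcd\nabcde\nabcdefgh\n' > sample.txt

# Keep at most the first 5 characters of each line.
tail sample.txt | cut -c 1-5
```

Lines shorter than 5 characters pass through unchanged; longer ones are truncated.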
echo lines of file - but no more than N characters per line [duplicate]
1,457,006,052,000
Here's a command to move all files whose name begin with 0 into a folder called zero : mv [0]* zero Question: What is a command for moving all files whose contents begin with 0 into a folder called zero? Hopefully, there is a short command doing that also. I know that the first character of the contents of a file is given by head -c 1 filename.
There isn't a command to do this. However, it's a straightforward piece of scripting.

Work out how to identify a file by its contents and move it:

f=example_file.txt
b=$(head -c1 <"$f")
[ "$b" = "0" ] && echo mv -- "$f" zero/

Work out how to iterate across all the 100,000 files in the directory:

find . -maxdepth 1 -type f -print

Or maybe your shell allows you to use a wildcard for a large number of entries, and the simpler, more obvious loop will work:

for f in *
do
    echo "./$f"
done

Work out how to run the mv code for each possible file:

find -maxdepth 1 -type f -exec sh -c '
    b=$(head -c1 "$1")
    [ "$b" = "0" ] && echo mv -- "$1" zero/
' _ {} \;

Or

for f in *
do
    [ "$(head -c1 "$f")" = "0" ] && echo mv -- "$f" zero/
done

Optimise the find version:

find -maxdepth 1 -type f -exec sh -c '
    for f in "$@"
    do
        [ "$(head -c1 "$f")" = "0" ] && echo mv -- "$f" zero/
    done
' _ {} +

In all cases, remove echo to change the command from telling you what it would do to doing it.
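A miniature run on fabricated files shows the selection logic (echo already removed here, so the move really happens):

```shell
mkdir -p zero
printf '0abc\n' > demo_zero.txt
printf 'xyz\n'  > demo_other.txt

# Move any file whose first byte is '0' into zero/.
for f in demo_*.txt
do
    [ "$(head -c1 "$f")" = "0" ] && mv -- "$f" zero/
done

ls zero/
```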
How to move all files whose contents begin with 0?
1,457,006,052,000
I want to show the 3th and the 7th lines in a file only using commands head and tail (I don't want to show the lines between the 3th and the 7th).
Using the MULTIOS facility in the zsh shell: $ head -n 7 file | tail -n 5 > >( head -n 1 ) > >( tail -n 1 ) line 3 line 7 That is, extract lines 3 through to 7 with head -n 7 file | tail -n 5 and then get the first and last line of that. In bash, this would be equivalent of $ head -n 7 file | tail -n 5 | tee >( head -n 1 ) | tail -n 1 line 3 line 7 which additionally uses tee to duplicate the data.
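If process substitution feels too exotic, a tiny helper built from only head and tail prints any single line; the helper name here is invented for the demo:

```shell
# nth FILE N -> print line N of FILE using only head and tail.
nth() { head -n "$2" "$1" | tail -n 1; }

seq 10 > lines.txt
nth lines.txt 3
nth lines.txt 7
```

This prints lines 3 and 7 only, skipping everything in between.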
Show particular lines using only head and tail
1,457,006,052,000
I'm trying to use the following approach to subset the output of a manual: man dig | nl | tail -n +389 | head -n 6 However, the output starts at line 304, not line 389. Doing some research, it seems lines marked as "#####################" are not counted. This is very aggravating, and one of my current books was using this approach to subset number lines. Is there any solution to fix the overlooked lines?
By default, nl doesn’t number blank lines. man dig | nl -ba | tail -n +389 | head -n 6 will show that tail is doing the right thing. -ba instructs nl to number all lines.
"tail" Is Returning the Wrong Requested Number Lines
1,457,006,052,000
I want to practice using head, uniq and cut, for this data here. I know this thread but it focuses too much on cat and external programs. I would like a simpler answer. I want to take an interval from the data file, e.g. lines 800-1600. I am not sure if cut is intended for this task with the flags -n and -m. At least, I could not extract lines, just patterns in some byte locations. Running mikeserv's answer Code 1.sh:

for i in 2 5
do  head -n"$i" >&"$((i&2|1))"
done <test.tex 3>/dev/null

and data

1
2
3
4
5
6
7
8
9
10
11

Running the code as sh 1.sh gives an empty line as output, although it should give 3 4 5 6 7. What do I understand wrong in Mike's answer? How would you select line intervals in data by sed, tail, head and cut?
for i in 799 800
do  head -n"$i" >&"$((i&2|1))"
done <infile 3>/dev/null

The above code will send the first 799 lines of an lseek()able <infile to /dev/null, and the next 800 lines of same to stdout. If you want to prune those 800 lines for sequential uniques, just append |uniq. In that case, you might also do:

sed -ne'800,$!d;1600q;N;/^\(.*\)\n\1$/!P;D' <infile

...but a prudent gambler would put their money on head;head|uniq in a race. If you want to process those lines for totally sorted uniques, then you can still use the first for loop and append instead:

... 3>/dev/null | grep -n '' | sort -t: -uk2 | sort -t: -nk1,1 | cut -d: -f2-

In the latter case it's probably worthwhile to export LC_ALL=C first.
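For the question's 11-line data, the more conventional tail/head spelling of an interval (skip the first 2 lines, then keep 5) is easy to verify:

```shell
seq 11 > test.tex

# Lines 3 through 7: start at line 3, keep 5 lines.
tail -n +3 test.tex | head -n 5
```

This prints 3 4 5 6 7, one number per line.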
select portion of file by line interval [duplicate]
1,457,006,052,000
when I call ls in ~ i get Documents Downloads Templates Desktop Music Videos Public Pictures If i pipe ls to head (e.g. ls | head -30) i get Desktop Documents Downloads Music Pictures Public Templates Videos I am trying to alias ls to ls | head -30 in order to not spam my terminal when doing ls in a big folder. The problem is that i prefer the first formatting (from normal ls). Is there any way I can keep the original formatting while limiting the number of results?
As the man page for ls describes: -C list entries by columns So, alias ls='ls -C | head -30' Beware that such an alias will preclude you from being able to pass any parameters to ls. For example: ls /tmp/ will likely not do what you expect. You may find that a shell function is a better choice than an alias.
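A shell function is one way around the alias's argument problem; a minimal sketch (the 30-line cap is arbitrary):

```shell
# Wrap ls: force column layout, cap output at 30 lines,
# and still pass normal ls arguments through.
ls() { command ls -C "$@" | head -n 30; }
```

Here ls demo_dir runs command ls -C demo_dir | head -n 30, so the argument reaches ls rather than head.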
keep formatting when piping ls to head
1,457,006,052,000
I can't remember how to append a command to a shell script. I searched for append, add, concat, and more without success. Basically I have

if [ -z "${NUMBER+x}" ]; then # check if NUMBER exists
    tail -n +"$HEAD" "$1" | head -n $((TAIL-HEAD+1))
else
    tail -n +"$HEAD" "$1" | head -n $((TAIL-HEAD+1)) | cat -n
fi

and it works fine, but I don't like the duplicate logic. I know that I could use a function or eval, but is there a simpler way to do this? In my head I have something like this:

belly = tail -n +$HEAD $1 | head -n $((TAIL-HEAD+1))
if [ -z "${NUMBER+x}" ]; then # check if NUMBER exists
    belly
else
    belly | cat -n
fi

But it doesn't work. What am I missing?
Actually this may have been answered here: Conditional pipeline

That answer takes the shell if/then/else mechanic and uses it to embed logic inside a pipeline. You can do that also with the && operator, but it is not so clean to read, so I prefer if. Basically, in your case the tail line would be like so (please note also the change of -z to -n):

tail -n +"$HEAD" "$1" | head -n $((TAIL-HEAD+1)) | if [ -n "${NUMBER+x}" ]; then cat -n; else cat; fi

(this may have some errors, I'm not so keen on Linux shell programming. But you get the point.)
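The conditional final stage can be exercised on generated data; here HEAD/TAIL select lines 2-4 and NUMBER toggles the numbering:

```shell
seq 5 > data.txt
HEAD=2 TAIL=4

# NUMBER unset: plain lines 2-4.
unset NUMBER
tail -n +"$HEAD" data.txt | head -n $((TAIL-HEAD+1)) |
    if [ -n "${NUMBER+x}" ]; then cat -n; else cat; fi

# NUMBER set: the same lines, numbered by cat -n.
NUMBER=1
tail -n +"$HEAD" data.txt | head -n $((TAIL-HEAD+1)) |
    if [ -n "${NUMBER+x}" ]; then cat -n; else cat; fi
```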
How can I append a command
1,457,006,052,000
Warning: I used these commands on a drive that had nothing on it (/dev/sdb). Do not attempt this on a drive with anything important on it. I was experimenting some, and I discovered that the following works: $ printf 'hi\n' | sudo tee /dev/sdb hi $ sudo head -n 1 /dev/sdb hi $ Neat. Here's where I'm confused. I tried it again with cat (the first command is the same, I replaced the second one with sudo cat /dev/sdb. It printed hi, followed by a newline, and hung. Doing Ctrl + C didn't work to stop it. Bummer. I reasoned that perhaps cat wanted a null (\0) character at the end. So I tried again (printf 'hi\n\0' | sudo tee /dev/sdb), and head worked as before, but cat still hung. How can I get cat to not hang when writing directly to a USB drive? I'm not asking if this is a good idea (it isn't). I'm well aware I could just format the drive and use a text file, but I'm curious why this didn't work as expected. I'm using Debian 11, with a 2 GB flash drive (/dev/sdb).
printf 'hi\n' | sudo tee /dev/sdb copies the standard input (from the pipe) to standard output and to /dev/sdb printf 'hi\n' | sudo cat /dev/sdb copies /dev/sdb to standard output. The output from the pipe is not read by cat. So cat does not hang, it is copying the whole contents of the disk to the terminal, and that takes a while. The pipe doesn't change anything for cat, since the first parameter tells it /dev/sdb is the input.
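If the intent was only to peek at the start of the device, bound the read explicitly instead of streaming everything. The commands are shown here against an ordinary file standing in for /dev/sdb, so they are safe to run anywhere:

```shell
img=$(mktemp)                           # stand-in for /dev/sdb
printf 'hi\n' > "$img"
head -c 3 "$img"                        # read only the first 3 bytes
dd if="$img" bs=1 count=3 2>/dev/null   # same thing with dd
```

Either command returns immediately, because the amount of data to read is fixed up front.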
When storing text on a USB drive, how do I make cat not hang?
1,457,006,052,000
I have a large text file (>200MB). I want to read [n, n+a] bytes across all rows. Suppose there are 1000 rows in the original text file. The output file would be 1000 rows. What I know head -c349 original.text|tail -c28 > output.txt. However, this only outputs one row. How can I iterate though all rows? Example: n = 2 a = 1. Input: 123456 789789 Output: 23 89
The cut command will do it. For example, cut -c 10-12 will print characters 10 to 12 (inclusive) from each line of its input. You can write cut -b 10-12 instead if you really mean bytes rather than characters.
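Applied to the question's example (n = 2, a = 1, i.e. characters 2 through 3 of each line):

```shell
# Prints "23" for the first line and "89" for the second.
printf '123456\n789789\n' | cut -c 2-3
```

For a real file, cut -c 2-3 original.txt > output.txt produces one output row per input row, as requested.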
How to get nth to n+ath bytes across all rows from a text file in *nix?
1,457,006,052,000
Possible Duplicate: How can I make iconv replace the input file with the converted output? I'm writing a script to change the content of my hosts file but I got stuck on the head output redirection. If I do head -n -1 hosts >hosts my hosts file will result empty and yes, it has more than 1 line.
That's because your shell truncates the file when you redirect to it. This happens before head gets a chance to read it. You can either use a temporary file: head -n -1 hosts > hosts.tmp && mv hosts.tmp hosts Or use sponge from the moreutils package: head -n -1 hosts | sponge hosts
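The truncate-before-read behaviour is easy to demonstrate on a throwaway file:

```shell
f=$(mktemp)
printf 'one\ntwo\n' > "$f"
head -n 1 "$f" > "$f"    # the shell truncates "$f" before head runs
wc -c < "$f"             # reports 0: everything is gone
```

The same thing happens with head -n -1, or with any other filter reading from and redirecting to the same file.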
Redirecting head output for update hosts file [duplicate]
1,426,771,187,000
In Bash, how does one do base conversion from decimal to another base, especially hex. It seems easy to go the other way: $ echo $((16#55)) 85 With a web-search, I found a script that does the maths and character manipulation to do the conversion, and I could use that as a function, but I'd have thought that bash would already have a built-in base conversion -- does it?
With bash (or any shell, provided the printf command is available (a standard POSIX command often built in the shells)): printf '%x\n' 85 With zsh, you can also do: dec=85 hex=$(([##16]dec)) That works for bases from 2 to 36 (with 0-9a-z case insensitive as the digits). $(([#16]dec)) (with only one #) expands to 16#55 or 0x55 (as a special case for base 16) if the cbases option is enabled (also applies to base 8 (0125 instead of 8#125) if the octalzeroes option is also enabled). With ksh93, you can use: dec=85 base54=${ printf %..54 "$dec"; } Which works for bases from 2 to 64 (with 0-9a-zA-Z@_ as the digits). With ksh and zsh, there's also: $ typeset -i34 x=123; echo "$x" 34#3l Though that's limited to bases up to 36 in ksh88, zsh and pdksh and 64 in ksh93. Note that all those are limited to the size of the long integers on your system (int's with some shells). For anything bigger, you can use bc or dc. $ echo 'obase=16; 9999999999999999999999' | bc 21E19E0C9BAB23FFFFF $ echo '16o 9999999999999999999999 p' | dc 21E19E0C9BAB23FFFFF With supported bases ranging from 2 to some number required by POSIX to be at least as high as 99. For bases greater than 16, digits greater than 9 are represented as space-separated 0-padded decimal numbers. $ echo 'obase=30; 123456' | bc 04 17 05 06 Or same with dc (bc used to be (and still is on some systems) a wrapper around dc): $ echo 30o123456p | dc 04 17 05 06
BASH base conversion from decimal to hex
1,426,771,187,000
Unfortunately bc and calc don't support xor.
Like this: echo $(( 0xA ^ 0xF )) Or if you want the answer in hex: printf '0x%X\n' $(( 0xA ^ 0xF )) On a side note, calc(1) does support xor as a function: $ calc base(16) 0xa xor(0x22, 0x33) 0x11
How to calculate hexadecimal xor (^) from shell?
1,426,771,187,000
# dd if=2013-Aug-uptime.csv bs=1 count=1 skip=3 2> /dev/null d # dd if=2013-Aug-uptime.csv bs=1 count=1 skip=0x3 2> /dev/null f Why the second command outputs a different value? Is it possible to pass the skip|seek offset to dd as an hexadecimal value?
Why the second command outputs a different value? For historical reasons, dd considers x to be a multiplication operator. So 0x3 is evaluated to be 0. Is it possible to pass the skip|seek offset to dd as an hexadecimal value? Not directly, as far as I know. As well as multiplication using the operator x, you can suffix any number with b to mean "multiply by 512" (0x200) and with K to mean "multiply by 1024" (0x400). With GNU dd you can also use suffixes M, G, T, P, E, Z and Y to mean multiply by 2 to the power of 20, 30, 40, 50, 60, 70, 80 or 90, respectively, and you can use upper or lower case except for the b suffix. (There are many other possible suffixes. For example, EB means "multiply by 10^18" and PiB means "multiply by 2^50". See info coreutils "block size" for more information, if you have a GNU installation.) You might find the above arcane, anachronistic, and geeky to the point of absurdity. Not to worry: you are not alone. Fortunately, you can just ignore it all and use your shell's arithmetic substitution instead (bash and other Posix compliant shells will work, as well as some non-Posix shells). The shell does understand hexadecimal numbers, and it allows a full range of arithmetic operators written in the normal way. You just need to surround the expression with $((...)): # dd if=2013-Aug-uptime.csv bs=1 count=$((0x2B * 1024)) skip=$((0x37))
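A quick sanity check of the arithmetic-substitution form against a tiny test file:

```shell
f=$(mktemp)
printf 'abcdef' > "$f"
# Skip the first 3 bytes (0x3) and read one byte: prints "d".
dd if="$f" bs=1 count=1 skip=$((0x3)) 2>/dev/null
```

The shell evaluates $((0x3)) to 3 before dd ever sees it, so dd's quirky x operator never comes into play.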
passing dd skip|seek offset as hexadecimal
1,426,771,187,000
I would like to know if there is a way of using bash expansion to view all possibilities of combination for a number of digits in hexadecimal. I can expand in binaries In base 2: echo {0..1}{0..1}{0..1} Which gives back: 000 001 010 011 100 101 110 111 In base 10: echo {0..9}{0..9} Which gives back: 00 01 02...99 But in hexadecimal: echo {0..F} Just repeat: {0..F}
You can; you just need to break the range {0..F} into two separate ranges {0..9} and {A..F}: $ printf '%s\n' {{0..9},{A..F}}{{0..9},{A..F}} 00 01 ... FE FF
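An alternative that avoids splitting the range at all is to let printf format a decimal counter as hex (a sketch; it assumes seq is available):

```shell
# Prints the same 256 two-digit values, one per line, in lower case;
# use %02X instead for upper case as in the brace-expansion output.
for i in $(seq 0 255); do
    printf '%02x\n' "$i"
done
```

This scales directly to wider values, e.g. %04x for four hex digits.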
Bash expansion hexadecimal
1,426,771,187,000
Is there a simple command to reverse an hexadecimal number? For example, given the hexadecimal number: 030201 The output should be: 010203 Using the rev command, I get the following: 102030 Update $ bash --version | head -n1 GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) $ xxd -version xxd V1.10 27oct98 by Juergen Weigert $ rev --version rev from util-linux 2.20.1
You can convert it to binary, reverse the bytes, optionally remove trailing newlines (needed with rev older than 2.24), and convert it back: $ xxd -revert -plain <<< 030201 | LC_ALL=C rev | tr -d '\n' | xxd -plain 010203 Using $ bash --version | head -n1 GNU bash, version 4.3.42(1)-release (x86_64-redhat-linux-gnu) $ xxd -version xxd V1.10 27oct98 by Juergen Weigert $ rev --version rev from util-linux 2.28.2 This does not work if the string contains 00 (the NUL byte), because rev will truncate the output at that point, or 0a (newline), because rev reverses each line rather than the entire output.
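A pure-shell alternative is to reverse the digit pairs directly, with no intermediate binary, so 00 and 0a bytes cause no trouble. A sketch (the function name revhex is invented, and it assumes an even number of hex digits):

```shell
revhex() {
    s=$1 out=''
    while [ "${#s}" -ge 2 ]; do
        pair=${s%"${s#??}"}   # leading two hex digits
        s=${s#??}             # drop them from the input
        out=$pair$out         # prepend: reverses byte order
    done
    printf '%s\n' "$out"
}
revhex 030201   # prints 010203
```

Applied to the other example from this page, revhex 0068732f6e69622f gives 2f62696e2f736800.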
Reverse a hexadecimal number in bash
1,426,771,187,000
SHORT VERSION (TL;DR) I have 2 small one-line files, seemingly identical : $ cat f1 f2 ./cconv.sh 100 EUR USD ./cconv.sh 100 EUR USD But they are not, there is a 1 byte difference in size : $ ls -l f1 f2 (...) 24 oct. 30 16:19 f1 (...) 23 oct. 30 16:19 f2 $ diff f1 f2 1c1 < ./cconv.sh 100 EUR USD --- > ./cconv.sh 100 EUR USD I used dhex to figure out the hexadecimal difference. It appears that : f1 finishes with c2 a0 55 53 44 0a f2 finishes with 20 55 53 44 0a Does anybody have a clue what's going on here ? What's the difference, and more importantly, where could it come from ? Here is a link to a zip file containing the 2 files, and a screenshot of the dhex result. LONG VERSION (ADDITIONAL EXPLANATIONS) The 2 files are excerpts from my ~/.bash_history file. I noticed a very strange behavior from my shells. Take the following very basic script : #!/bin/sh echo $# echo "$1" echo "$2" echo "$3" exit 0 Under certain circumstance (but which ones ???), it doesn't take the space as an argument separator : $ ./cconv.sh 100 EUR USD 2 100 EUR USD But sometimes it works just as it is supposed to : $ ./cconv.sh 100 EUR USD 3 100 EUR USD It drives me nuts ! I spent several hours trying to figure out what's going on. Here are some tests I did to try and narrow it down : I work on a laptop with Debian 11, Gnome 3.38. But I happen to also have a virtual machine with exactly the same OS (D11, G3.38), and in the VM everything works just fine. So obviously I must have done something to my bare metal laptop for it to misbehave. But what ??? I noticed that the problem only occurs in a graphical session. If I open a tty (Ctrl+Alt+Fn), it works fine I suspected my terminal emulator. But the behavior is the same in different emulators (I tried Gnome Terminal, Terminator and Konsole, same result) I suspected the shell. 
But the behavior is the same either with Bash or Dash I disabled all customization I could think of : I temporarily removed /etc/bashrc, /etc/profile, /etc/inputrc and /etc/rc.local I temporarily removed ~/.bashrc, ~/.profile and ~/.inputrc I disabled all Gnome Shell's extensions I even suspected the keyboard, and plugged in a USB keyboard. Same result. I'm really confused, and have not a clue what's going on. I finally noticed that small difference between the 2 commands in ~/.bash_history : one comes from my Gnome session, the other comes from my tty session. Obviously there's a difference, but what is it exactly, and what could be the cause ?
c2 a0 is the UTF-8 encoding of the non-breaking space character. It usually looks like a regular space, but isn't recognized as whitespace by the shell. In a few keymaps, something like AltGr+Space, or Option+Space produces a non-breaking space. Which is amusing if your keymap also has e.g. the pipe character behind AltGr or Option, making it easier to type |<nbsp> instead of |<sp>, giving you errors like this: $ echo foo | grep . bash:  grep: command not found (I think SE folds the nbsp to a regular space, so you probably won't get the error if you copypaste that from here.) If you copied and pasted the arguments from some other tool, you might get odd formatting from there, but it depends on the program. See Deal with nbsp character in shell for some solutions for not producing the character.
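To hunt the character down and repair an affected file (assuming UTF-8 input, where the non-breaking space is the byte pair c2 a0):

```shell
f=$(mktemp)
printf './cconv.sh 100\302\240EUR USD\n' > "$f"   # \302\240 = c2 a0, an nbsp
nbsp=$(printf '\302\240')
grep -n "$nbsp" "$f"      # show the lines containing an nbsp
sed "s/$nbsp/ /g" "$f"    # print the file with each nbsp replaced by a space
```

Redirect the sed output to a new file (or use sed -i with GNU sed) to fix the file for good.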
Space not taken as an argument separator by shell script (could someone please explain that small file difference ?)
1,426,771,187,000
Say for example I've got this C function: void f(int *x, int *y) { (*x) = (*x) * (*y); } When saved to f.c, compiling with gcc -c f.c produces f.o. objdump -d f.o gives this: f.o: file format elf64-x86-64 Disassembly of section .text: 0000000000000000 <f>: 0: 55 push %rbp 1: 48 89 e5 mov %rsp,%rbp 4: 48 89 7d f8 mov %rdi,-0x8(%rbp) 8: 48 89 75 f0 mov %rsi,-0x10(%rbp) c: 48 8b 45 f8 mov -0x8(%rbp),%rax 10: 8b 10 mov (%rax),%edx 12: 48 8b 45 f0 mov -0x10(%rbp),%rax 16: 8b 00 mov (%rax),%eax 18: 0f af d0 imul %eax,%edx 1b: 48 8b 45 f8 mov -0x8(%rbp),%rax 1f: 89 10 mov %edx,(%rax) 21: 5d pop %rbp 22: c3 retq I'd like it to output something more like this: 55 48 89 e5 48 89 7d f8 48 89 75 f0 48 8b 45 f8 8b 10 48 8b 45 f0 8b 00 0f af d0 48 8b 45 f8 89 10 5d c3 I.e., just the hexadecimal values of the function. Is there some objdump flag to do this? Otherwise, what tools can I use (e.g. awk, sed, cut, etc) to get this desired output?
You can extract the byte values in the text segment with: $ objcopy -O binary -j .text f.o fo The -O binary option: objcopy can be used to generate a raw binary file by using an output target of binary (e.g., use -O binary). When objcopy generates a raw binary file, it will essentially produce a memory dump of the contents of the input object file. All symbols and relocation information will be discarded. The memory dump will start at the load address of the lowest section copied into the output file. The -j .text option: -j sectionpattern --only-section=sectionpattern Copy only the indicated sections from the input file to the output file. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. Wildcard characters are accepted in sectionpattern. The end result is a file (fo) with the binary values of only the .text section, that is the executable code without symbols or relocation information. And then print the hex values of the fo file: $ od -An -v -t x1 fo 55 48 89 e5 48 89 7d f8 48 89 75 f0 48 8b 45 f8 8b 10 48 8b 45 f0 8b 00 0f af d0 48 8b 45 f8 89 10 90 5d c3
Get hex-only output from objdump
1,426,771,187,000
I have the bash line: expr substr $SUPERBLOCK 64 8 Which is return to me string line: 00080000 I know that this is, actually, a 0x00080000 in little-endian. Is there a way to create integer-variable from it in bash in big-endian like 0x80000?
Probably a better way to do this but I've come up with this solution which converts the number to decimal and then back to hex (and manually adds the 0x): printf '0x%x\n' "$((16#00080000))" Which you could write as: printf '0x%x\n' "$((16#$(expr substr "$SUPERBLOCK" 64 8)))"
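Putting it together on a hypothetical stand-in string (63 filler characters followed by the question's 8 hex digits, so that positions 64-71 hold 00080000):

```shell
# Build a fake SUPERBLOCK value purely for illustration.
SUPERBLOCK=$(printf 'x%.0s' $(seq 63))00080000
expr substr "$SUPERBLOCK" 64 8                                # -> 00080000
printf '0x%x\n' "$((16#$(expr substr "$SUPERBLOCK" 64 8)))"   # -> 0x80000
```

The 16# prefix tells the shell arithmetic to read the digits as base 16, and printf %x drops the leading zeros on output.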
How to read string as hex number in bash?
1,426,771,187,000
I am dealing with an embedded system which has some memory that is accessible by a file descriptor (I have no idea what am I saying, so please correct me if I am wrong). This memory is 32 kB and I want to fill it with 0x00 to 0xFFFFFFFF. I know this for text files: exec {fh} >> ./eeprom; for i in {0..32767}; do echo $i >& $fh; done; $fh>&-; This will write ASCII characters 0 to 977. And if I do a hexdump eeprop | head I get: 0000000 0a30 0a31 0a32 0a33 0a34 0a35 0a36 0a37 0000010 0a38 0a39 3031 310a 0a31 3231 310a 0a33 0000020 3431 310a 0a35 3631 310a 0a37 3831 310a 0000030 0a39 3032 320a 0a31 3232 320a 0a33 3432 0000040 320a 0a35 3632 320a 0a37 3832 320a 0a39 0000050 3033 330a 0a31 3233 330a 0a33 3433 330a 0000060 0a35 3633 330a 0a37 3833 330a 0a39 3034 0000070 340a 0a31 3234 340a 0a33 3434 340a 0a35 0000080 3634 340a 0a37 3834 340a 0a39 3035 350a 0000090 0a31 3235 350a 0a33 3435 350a 0a35 3635 How can I fill each address with its uint32, not the ASCII representation?
perl -e 'print pack "L*", 0..0x7fff' > file Would write them in the local system's endianness. Use: perl -e 'print pack "L>*", 0..0x7fff' perl -e 'print pack "L<*", 0..0x7fff' To force big-endian or little-endian respectively regardless of the native endianness of the local system. See perldoc -f pack for details. With bash builtins specifically, you can write arbitrary byte values with: printf '\123' # 123 in octal printf '\xff' # ff in hexadecimal So you could do it by writing each byte of the uint32 numbers by hand with something like: for ((i = 0; i <= 32767; i++)); do printf -v format '\\x%x' \ "$(( i & 0xff ))" \ "$(( (i >> 8) & 0xff ))" \ "$(( (i >> 16) & 0xff ))" \ "$(( (i >> 24) & 0xff ))" printf "$format" done (here in little-endian). In any case, note that 32767 is 0x7fff, not 0xFFFFFFFF. uint32 numbers 0 to 32767 take up 128KiB, not 32kb. 0 to 0xFFFFFFFF would take up 16GiB. To write those 16GiB in perl, you'd need to change the code to: perl -e 'print pack "L", $_ for 0..0xffffffff' As otherwise it would try (and likely fail) to allocate those 16GiB in memory. On my system, I find perl writes the output at around 30MiB/s, while bash writes it at around 250KiB/s (so would take hours to complete). To write 32kb (32000 bits, 4000 bytes, 1000 uint32 numbers) worth of uint32 numbers, you'd use the 0..999 range. 0..8191 for 32KiB. Or you could write 0..16383 as uint16 numbers by replacing L (unsigned long) with S (unsigned short).
How to write binary values into a file in Bash instead of ASCII values
1,426,771,187,000
Here is the beginning of a file: # hexdump -n 550 myFile 0000000 f0f2 f5f0 f7f9 f1f1 f1f0 f0f0 e3f1 f3c8 0000010 f3f5 0000 0000 000c 0000 0000 0000 000c 0000020 0000 0c00 0000 0000 0000 0c00 0000 0000 0000030 000c 0000 0000 0000 000c 0000 0c00 0000 0000040 0000 0000 0c00 0000 0000 000c 0000 0000 0000050 0000 000c 0000 0c00 0000 0000 0000 0c00 0000060 0000 0000 000c 0000 0000 0000 000c 0000 * 00000b0 0000 0000 000c 0000 0000 0000 0000 0000 00000c0 0000 0000 0000 0c00 0000 0000 0000 0c00 00000d0 0000 0000 000c 0000 0000 0000 000c 0000 00000e0 0c00 0000 0000 0000 0c00 0000 0000 000c 00000f0 0000 0000 0000 000c 0000 0c00 0000 0000 0000100 0000 0c00 0000 0000 000c 0000 0000 0000 0000110 000c 0000 0c00 0000 0000 0000 0c00 0000 0000120 0000 0000 0c00 0000 0000 0000 0c00 0000 * 0000160 0000 0000 0c00 0000 0000 0000 0000 0000 0000170 0000 0000 0000 0000 000c 0000 0000 0000 0000180 000c 0000 0c00 0000 0000 0000 0c00 0000 0000190 0000 000c 0000 0000 0000 000c 0000 0c00 00001a0 0000 0000 0000 0c00 0000 0000 000c 0000 00001b0 0000 0000 000c 0000 0c00 0000 0000 0000 00001c0 0c00 0000 0000 000c 0000 0000 0000 000c 00001d0 0000 0000 0000 000c 0000 0000 0000 000c * 0000210 0000 0000 0000 000c 0000 0000 0000 0000 0000220 0000 0000 0a00 0000226 in which we can see the hex values 0c and 0a I don't understand why grep finds 0c but not 0a: # grep -P '\x0c' myFile Fichier binaire myFile correspondant # grep -P '\x0a' myFile <nothing in the output> I am using CentOS.
\x0a isn't just any hex value - it's the hex value corresponding to the ASCII linefeed character. Since grep is (by default) line-based, the linefeed characters are stripped out before pattern matching takes place. At least with GNU grep, you can change this behavior with the -z option: -z, --null-data Treat input and output data as sequences of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. however note that this will strip out ASCII nulls, so that you will no longer be able to grep for those.
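With GNU grep the difference is easy to see (this assumes -P support, as used in the question):

```shell
# Without -z the trailing newline is stripped before matching;
# with -z the records are NUL-terminated, so the \x0a byte survives.
printf 'hi\n' | grep -cP '\x0a' || true   # prints 0 (grep exits 1: no match)
printf 'hi\n' | grep -zcP '\x0a'          # prints 1
```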
grep should find a hex value in a file but doesn't
1,426,771,187,000
I would like to switch between hostnames and hex IP addresses, and vice versa. I have installed syslinux-utils on Debian Stretch, which provides gethostip: gethostip -x google.com D83ACD2E How can I switch D83ACD2E back to hostname? In the older Debian Wheezy, I could use the commands 'getaddrinfo' and 'getnameinfo': # getaddrinfo google.com D83ACD4E # getnameinfo D83ACD4E mil04s25-in-f14.1e100.net I was unable to find these tools in Debian Stretch. Were these tools replaced by others?
You may be able to use glibc's getent here: $ getent ahostsv4 0xD83ACD2E | { read ip rest && getent hosts "$ip"; } 216.58.205.46 mil04s24-in-f46.1e100.net Another perl approach: $ perl -MSocket -le '($n)=gethostbyaddr(inet_aton("0xD83ACD2E"), AF_INET); print $n' mil04s24-in-f46.1e100.net
convert hex IP address into hostname
1,426,771,187,000
I see sometimes star symbol (*) in the hex editor, like ... 00001d0 0000 0000 0000 0000 0000 0000 0000 0000 * 00001f0 0000 0000 0000 0000 0008 0000 0000 0000 ... It is probably some sort of separator. However, there are many other separators too. What is the meaning of this star symbol in hex data?
It means that one or more lines were suppressed, because they are identical to the previous line; in this case, it means that the line starting at 00001e0 is all zeroes, same as that starting at 00001d0. To determine the number of deleted lines, you need to look at the addresses involved and the length of each line; in this case, a single line was deleted. If you're using od, this is controlled by the -v flag. By default od will suppress duplicate lines, -v tells it not to.
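The behaviour is easy to reproduce with a run of identical bytes; od -x is used here since the -h of the question's dump is not available everywhere:

```shell
head -c 64 /dev/zero | od -x      # the repeated middle lines collapse into "*"
head -c 64 /dev/zero | od -v -x   # -v prints every line, no "*"
```

In the first command the four identical 16-byte lines show up as one line plus a "*"; the address of the next line (0000100, i.e. 64 in octal) tells you how much was suppressed.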
What is the meaning of Star symbol * in Hex data?
1,426,771,187,000
allHexChars.txt \x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff allowedChars.txt \x01\x02\x03\x04\x05\x06\x07\x08\x09\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3b\x3c\x3d\x3e\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f How can I get this output? 
\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff I've tried diff, vimdiff, sdiff, perl, awk, sed. I've tried echoing the contents of both files into one, and running the below: perl -ne 'print unless $seen{$_}++' everything.txt awk '!seen[$0]++' everything.txt But nothing seems to give me the output I need. Not sure if I'm just misinterpreting, or if I need to specify the \x as a delimiter, or replace it with something else. All I want is the delta between the two files: the hex characters that are in allHexChars.txt that don't exist in allowedChars.txt. I don't mind how
sed -r 'H;$!d;x;s:\n::g;:l;s:(\\x..)(.*)\1:\2:;tl' allHexChars.txt allowedChars.txt > missingChars.txt The above GNU sed script assumes two things as I understood them from the question: inside the files no hex character is listed more than one time the first file contains all the hex characters from the second file To visualize the differences, use: diff -y <(fold -4 allHexChars.txt) <(fold -4 allowedChars.txt)
How can I list the different hex characters between two files?
1,426,771,187,000
I found a problematic sequence of a supposedly UTF-8 encoded text file. The strange thing is that grep seems unable to match this non-ASCII line. $ iconv -f utf8 -t iso88591 corrupt_part.txt --output corrupt_part.txt.conv iconv: illegal input sequence at position 8 $ cat corrupt_part.txt Oberallg�u $ grep -P -n '[^\x00-\x7F]' corrupt_part.txt $ od -h corrupt_part.txt 0000000 624f 7265 6c61 676c 75e4 0a20 0000014 So \xe4 is e.g. ä in the extended ASCII set. Yet, filtering on the control and printable characters (ascii range) the grep command above should match the \xe4 character. Why am I not getting any grep output?
e4 75 is indeed an illegal utf8 sequence. In utf8, a byte with the highest nibble equal to 0xe introduces a three byte sequence. The second byte of such a sequence cannot be 0x75, because the high order nibble of that second byte (0x7) is not between 0x8 and 0xb. This explains why iconv rejects that file as invalid utf8. Perhaps it's already iso8859-1? For a summary of utf8 encoding, consult this wikipedia table As for your grep issue, perhaps if you specify the C/POSIX locale, where characters are equivalent to bytes: LC_ALL=C grep -P -n '[^\x00-\x7F]' corrupt_part.txt Using an old Ubuntu system, GNU grep, and an environment using the en_US.UTF-8 locale: $ od -h bytes 0000000 624f 7265 6c61 676c 75e4 0a20 0000014 $ grep -P '[^\x00-\x7F]' bytes | od -h 0000000 624f 7265 6c61 676c 75e4 0a20 0000014 $ LC_ALL=C grep -P '[^\x00-\x7F]' bytes | od -h 0000000 624f 7265 6c61 676c 75e4 0a20 0000014
Grep is not matching non-ascii characters
1,426,771,187,000
I am using the following grep script to output all the unmatched patterns: grep -oFf patterns.txt large_strings.txt | grep -vFf - patterns.txt > unmatched_patterns.txt patterns file contains the following 12-characters long substrings (some instances are shown below): 6b6c665d4f44 8b715a5d5f5f 26364d605243 717c8a919aa2 large_strings file contains extremely long strings of around 20-100 million characters longs (a small piece of the string is shown below): 121b1f212222212123242223252b36434f5655545351504f4e4e5056616d777d80817d7c7b7a7a7b7c7d7f8997a0a2a2a3a5a5a6a6a6a6a6a7a7babbbcbebebdbcbcbdbdbdbdbcbcbcbcc2c2c2c2c2c2c2c2c4c4c4c3c3c3c2c2c3c3c3c3c3c3c3c3c2c2c1c0bfbfbebdbebebebfbfc0c0c0bfbfbfbebebdbdbdbcbbbbbababbbbbcbdbdbdbebebfbfbfbebdbcbbbbbbbbbcbcbcbcbcbcbcbcbcb8b8b8b7b7b6b6b6b8b8b9babbbbbcbcbbbabab9b9bababbbcbcbcbbbbbababab9b8b7b6b6b6b6b7b7b7b7b7b7b7b7b7b7b6b6b5b5b6b6b7b7b7b7b8b8b9b9b9b9b9b8b7b7b6b5b5b5b5b5b4b4b3b3b3b6b5b4b4b5b7b8babdbebfc1c1c0bfbec1c2c2c2c2c1c0bfbfbebebebebfc0c1c0c0c0bfbfbebebebebebebebebebebebebebdbcbbbbbab9babbbbbcbcbdbdbdbcbcbbbbbbbbbbbabab9b7b6b5b4b4b4b4b3b1aeaca9a7a6a9a9a9aaabacaeafafafafafafafafafb1b2b2b2b2b1b0afacaaa8a7a5a19d9995939191929292919292939291908f8e8e8d8c8b8a8a8a8a878787868482807f7d7c7975716d6b6967676665646261615f5f5e5d5b5a595957575554525 How can we speed up the above script (gnu parallel, xargs, fgrep, etc.)? I tried using --pipepart and --block but it doesn't allow you to pipe two grep commands. Btw these are all hexadecimal strings and patterns.
A much more efficient answer that does not use grep: build_k_mers() { k="$1" slot="$2" perl -ne 'for $n (0..(length $_)-'"$k"') { $prefix = substr($_,$n,2); $fh{$prefix} or open $fh{$prefix}, ">>", "tmp/kmer.$prefix.'"$slot"'"; $fh = $fh{$prefix}; print $fh substr($_,$n,'"$k"'),"\n" }' } export -f build_k_mers rm -rf tmp mkdir tmp export LC_ALL=C # search strings must be sorted for comm parsort patterns.txt | awk '{print >>"tmp/patterns."substr($1,1,2)}' & # make shorter lines: Insert \n(last 12 char before \n) for every 32k # This makes it easier for --pipepart to find a newline # It will not change the kmers generated perl -pe 's/(.{32000})(.{12})/$1$2\n$2/g' large_strings.txt > large_lines.txt # Build 12-mers parallel --pipepart --block -1 -a large_lines.txt 'build_k_mers 12 {%}' # -j10 and 20s may be adjusted depending on hardware parallel -j10 --delay 20s 'parsort -u tmp/kmer.{}.* > tmp/kmer.{}; rm tmp/kmer.{}.*' ::: `perl -e 'map { printf "%02x ",$_ } 0..255'` wait parallel comm -23 {} {=s/patterns./kmer./=} ::: tmp/patterns.?? I have tested this on a full job (patterns.txt: 9GBytes/725937231 lines, large_strings.txt: 19GBytes/184 lines) and on my 64-core machine it completes in 3 hours.
Boosting the grep search using GNU parallel
1,426,771,187,000
I have a script which process some information coming from a web page. I guess that because of the encoding of the page, some special characters are encoded in hexadecimal. For example, I have the the string "%2f" that should be translated to "/". How can I, in bash, translate those special characters in hex to ASCII? Any ideas?
Bash has a printf builtin, which behaves much the same as the one we know from C. The syntax differs a little. printf '\x2f' If you don't need to worry about higher-level data consistency problems, you can simply convert a URL with this function: function deUrl() { printf "${1//%/\\x}" } (It converts every % to a \x, then prints it with printf.)
How to convert a special hex character from an html page in bash? [duplicate]
1,426,771,187,000
od (octal dump) command is implemented in Unix since Version 1. However, I couldn't find a reverse command in the Version 6 Unix manual (1975). Modern ways of reversing od with either xxd or sed do not apply to Unix V6, since sed and awk appeared only in Version 7 (while uudecode, xxd and base64 are not available neither). The questions are: How did people in 1975 convert octal or hex listings into binary files? How can I do this in a PDP-11 emulator that only allows me to paste text into terminal from clipboard? (In other words, is it necessary to write my own program for that?) Here is a sample hex dump of a binary file by od that I'd like to convert back into binary: # od -h /bin/sum 0000000 0107 00ba 0000 0204 0000 0000 0000 0001 0000020 1583 0bd6 0ac3 0601 8901 0a37 00ae 15b7 0000040 0002 8905 0000 0000 8609 15c0 0001 8904 0000060 0026 0005 01ef 706f 6472 000a 1001 0a05 0000100 1040 8903 00be 0200 870b 0bc0 030e 0ab7 0000120 007a 15c2 00be 9484 6105 0b45 7e04 01f0 0000140 15c0 0001 8904 0080 0002 09f7 0026 15c0 0000160 0001 8904 0083 0001 1dc5 0050 09f7 0014 0000200 15c0 0001 8904 0082 0001 1040 8906 01c2 0000220 203f 200a 1026 1066 1141 09f7 0006 1581 0000240 1580 0087 0a00 7217 000a 1066 1001 0302 0000260 09f7 fff0 15b7 0012 65f7 0030 000c 15c0 0000300 0001 8904 00ba 0001 0087 0000312
Looks like Version 6 Unix didn't include many common tools yet that appeared only in Version 7 (like sed and awk). At that point, Unix was also not commercialized yet, so "reverse hex dump" could be missing simply because there was no wide demand for that operation or because Ken (or some other programmer) provided such tool from their unofficial /usr directory at Bell Labs. Who knows. But here is my implementation for reversing a hex dump on Unix V6. When compiled like cc -s -O unhex.c, the resulting executable is only 1160 bytes and works faster than dumping (as expected). Notice how C language of that time still had compound assignment syntax of B (=+, =*, etc.) and how it expected programmers to provide their own buffers for file I/O. /* reverse "od -h" operation on Unix V6 */ /* written in pre-K&R C */ /* derived from wc.c and cvopt.c */ int ibuf[259]; int obuf[259]; main(argc,argv) char **argv; { int token, bytecnt; register char *p1, *p2; /* input buffer pointers */ register int c; /* char or read count */ char sp, b1, b2, lastc, lastb2, nfirst; obuf[0] = 1; /* standard output by default */ if (argc>2) { /* create output file */ if ((obuf[0] = creat(argv[2], 0666)) < 0) { diag(argv[2]); diag(": failed to create\n"); return; } } if (argc>1 && fopen(argv[1], ibuf)>=0) { p1 = 0; p2 = 0; sp = 0; token = 0; bytecnt = 0; nfirst = 0; for(;;) { /* reading from file */ if (p1 >= p2) { p1 = &ibuf[1]; c = read(ibuf[0], p1, 512); if (c <= 0) break; p2 = p1+c; } /* decoding loop */ c = 0; c =| *p1++; if (c==' ' || c=='\n') { b1 = token; b2 = token >> 8; if (lastc!=' ' && lastc!='\n') { /* end of token */ if (sp>0) { if (nfirst) putc(lastb2, obuf); putc(b1, obuf); lastb2 = b2; nfirst = 1; } else { /* first token in the line */ bytecnt = token; } } if (c==' ') sp++; else { /* new line */ sp = 0; fflush(obuf); } token = 0; } else { /* actual hex and octal conversion */ token =* sp>0 ? 16 : 8; token =+ c<='9' ? 
c-'0' : c-'W'; } lastc = c; } if (!(bytecnt & 1)) { putc(lastb2, obuf); fflush(obuf); } close(ibuf[0]); close(obuf[0]); } else if (argc>1) { diag(argv[1]); diag(": cannot open\n"); } else { diag("error: filename missing\n"); } } diag(s) char *s; { while(*s) write(2,s++,1); } UPD. I published a faster and simpler version on GitHub, where the syntax is also highlighted.
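For comparison, on a modern system (not V6) the same reversal is a short pipeline. A sketch assuming GNU od, xxd and dd, and an even-length file (od -x pads the last 16-bit word of an odd-length file): strip the offset column, undump the hex digits, then swap each byte pair back into PDP-11 word order.

```shell
# round-trip a small file through an od -x style 16-bit word dump
printf 'abcdef' > orig.bin
od -x orig.bin |          # hex words, offset in columns 1-7
    cut -c9- |            # drop the offset column
    xxd -r -p |           # hex digits back to bytes (big-endian pairs)
    dd conv=swab 2>/dev/null > copy.bin   # swap back to LE word order
cmp orig.bin copy.bin && echo identical   # -> identical
```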
Undump od (octal or hex dump) in Version 6 Unix
1,426,771,187,000
The hex string 0068732f6e69622f represents the ASCII string /bin/sh when it's stored in memory in LE format. Is there any Linux utility that will take the hex string and reverse its bytes (2f62696e2f736800), such that xxd -r -ps will display /bin/sh? $ echo -n 0068732f6e69622f | xxd -r -ps hs/nib/ I've looked into xxd -e, but it's not possible to use it with -r: -e little-endian dump (incompatible with -ps,-i,-r).
$ echo 0068732f6e69622f | rev | dd conv=swab 2>/dev/null | xxd -r -p /bin/sh rev reverses the input string: 0068732f6e69622f -> f22696e6f2378600 dd conv=swab 2>/dev/null swaps every pair of bytes and discards dd's noisy output on stderr: f2 -> 2f, 26 -> 62, ...
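If rev or dd aren't at hand, the same pair-wise swap can be done in plain sh; swap_hex below is a hypothetical helper name, and it assumes an even number of hex digits:

```shell
# Reverse a hex string byte-wise: peel two digits at a time off the
# front and prepend them to the result.
swap_hex() {
    s=$1 out=''
    while [ -n "$s" ]; do
        out="${s%"${s#??}"}$out"   # first two hex digits, prepended
        s=${s#??}                  # drop them from the input
    done
    printf '%s\n' "$out"
}
swap_hex 0068732f6e69622f               # -> 2f62696e2f736800
swap_hex 0068732f6e69622f | xxd -r -p   # /bin/sh (plus a trailing NUL)
```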
How to modify a hex string to LE-format before passing it to `xxd -r` to view its binary contents?
1,426,771,187,000
I have a Java class which the compiler refuses to compile due to \ufeff at the start of the file. I can view the fact that the BOM is present by vim -b file.java, but neither xxd nor hexdump show the two bytes. Is there some way to make them do so?
The U+FEFF character is encoded in UTF-8 over 3 bytes: ef bb bf. xxd or hexdump shows you the byte content, so those 3 bytes, not the character that those 3 bytes encode like vim -b does. To remove that BOM (which doesn't make sense in UTF-8) and fix other idiosyncrasies of Microsoft text files (which is likely the source of your problem), you can use dos2unix. $ printf '\ufefffoobar\r\n' | hd 00000000 ef bb bf 66 6f 6f 62 61 72 0d 0a |...foobar..| 0000000b $ printf '\ufefffoobar\r\n' | uconv -x name \N{ZERO WIDTH NO-BREAK SPACE}\N{LATIN SMALL LETTER F}\N{LATIN SMALL LETTER O}\N{LATIN SMALL LETTER O}\N{LATIN SMALL LETTER B}\N{LATIN SMALL LETTER A}\N{LATIN SMALL LETTER R}\N{<control-000D>}\N{<control-000A>} $ printf '\ufefffoobar\r\n' | uconv -x hex \uFEFF\u0066\u006F\u006F\u0062\u0061\u0072\u000D\u000A $ printf '\ufefffoobar\r\n' | dos2unix | hd 00000000 66 6f 6f 62 61 72 0a |foobar.| 00000007
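If dos2unix isn't installed, GNU sed can strip the three BOM bytes in place. A sketch; the \xHH escapes and -i are GNU sed extensions:

```shell
printf '\357\273\277foobar\n' > bom.txt   # EF BB BF then "foobar"
sed -i '1s/^\xef\xbb\xbf//' bom.txt       # delete a leading BOM, if any
head -c 6 bom.txt                         # -> foobar
```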
Why does xxd not show the byte order mark?
1,426,771,187,000
If I have, say: blah;PC=1234abcd PC=4444bbcd;blah PC=0000abcd;;foo PC=1234abff How do I grep for lines with PC values in a given range, say 1234ab00 to 1234b0ff. The - range option seems to only apply to the regular 0-9a-A order which obviously won't work for hexadecimal ranges.
grep -f <(printf "%x\n" $(seq -f "%.f" $(printf "%d %d" 0x1234ab00 0x1234b0ff))) file The inner printf prints decimal values of the two hex values. Then seq prints all between them, in decimal. The outer printf prints hex values for all those decimal values. And finally grep -f searches for all those patterns in the file. The output: blah;PC=1234abcd PC=1234abff
How to use grep to return lines with an hexadecimal number in a given range?
1,426,771,187,000
So I'm using emacs, which has a stupendous hexl-mode to view the byte offset in a file right over the hex values, similar to: 87654321 0011 2233 4455 6677 8899 aabb ccdd eeff 0123456789abcdef 00000000: 5765 6c63 6f6d 6520 746f 2047 4e55 2045 Welcome to GNU E As a fan of this capability, I'm wondering if this is something I can pull out of xxd or hexdump? Or if anybody has an awk script that does this and keeps it lined up properly.
My favourite use of hexdump is in this format: hexdump -v -e '"%08_ax "' -e '16/1 "%02X "" "" "' -e '16/1 "%_p""\n"' That gives output similar to % echo hello there everyone | hexdump -v -e '"%08_ax "' -e '16/1 "%02X "" "" "' -e '16/1 "%_p""\n"' 00000000 68 65 6C 6C 6F 20 74 68 65 72 65 20 65 76 65 72 hello there ever 00000010 79 6F 6E 65 0A yone. It would be easy to simply put an echo in front of this: echo hello there everyone | (echo '87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef' ; hexdump -v -e '"%08_ax "' -e '16/1 "%02X "" "" "' -e '16/1 "%_p""\n"') 87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef 00000000 68 65 6C 6C 6F 20 74 68 65 72 65 20 65 76 65 72 hello there ever 00000010 79 6F 6E 65 0A Alternatively we could "page" the output; eg put the header every 16 lines, with a simple awk filter: cat x | hexdump -v -e '"%08_ax "' -e '16/1 "%02X "" "" "' -e '16/1 "%_p""\n"' | awk '(NR-1)%16 == 0 { print "\n87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef"} ; { print }' | less 87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef 00000000 54 68 69 73 20 69 73 20 6C 69 6E 65 20 31 0A 54 This is line 1.T 00000010 68 69 73 20 69 73 20 6C 69 6E 65 20 32 0A 54 68 his is line 2.Th 00000020 69 73 20 69 73 20 6C 69 6E 65 20 33 0A 54 68 69 is is line 3.Thi 00000030 73 20 69 73 20 6C 69 6E 65 20 34 0A 54 68 69 73 s is line 4.This 00000040 20 69 73 20 6C 69 6E 65 20 35 0A 54 68 69 73 20 is line 5.This 00000050 69 73 20 6C 69 6E 65 20 36 0A 54 68 69 73 20 69 is line 6.This i 00000060 73 20 6C 69 6E 65 20 37 0A 54 68 69 73 20 69 73 s line 7.This is 00000070 20 6C 69 6E 65 20 38 0A 54 68 69 73 20 69 73 20 line 8.This is 00000080 6C 69 6E 65 20 39 0A 54 68 69 73 20 69 73 20 6C line 9.This is l 00000090 69 6E 65 20 31 30 0A 54 68 69 73 20 69 73 20 6C ine 10.This is l 000000a0 69 6E 65 20 31 31 0A 54 68 69 73 20 69 73 20 6C ine 11.This is l 000000b0 69 6E 65 20 31 32 0A 54 68 69 73 20 
69 73 20 6C ine 12.This is l 000000c0 69 6E 65 20 31 33 0A 54 68 69 73 20 69 73 20 6C ine 13.This is l 000000d0 69 6E 65 20 31 34 0A 54 68 69 73 20 69 73 20 6C ine 14.This is l 000000e0 69 6E 65 20 31 35 0A 54 68 69 73 20 69 73 20 6C ine 15.This is l 000000f0 69 6E 65 20 31 36 0A 54 68 69 73 20 69 73 20 6C ine 16.This is l 87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef 00000100 69 6E 65 20 31 37 0A 54 68 69 73 20 69 73 20 6C ine 17.This is l 00000110 69 6E 65 20 31 38 0A 54 68 69 73 20 69 73 20 6C ine 18.This is l I might want to put some separators in there to make it easier to distinguish between the "header" and the content. This is easily made into a script: % cat hex #!/bin/sh hexdump -v -e '"%08_ax "' -e '16/1 "%02X "" "" "' -e '16/1 "%_p""\n"' | awk '(NR-1)%16 == 0 { print "\n87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef\n======== == == == == == == == == == == == == == == == == ================"} ; { print }' Now you can do % hex < x or % cat x | hex And similar commands. 
87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef ======== == == == == == == == == == == == == == == == == ================ 00000000 54 68 69 73 20 69 73 20 6C 69 6E 65 20 31 0A 54 This is line 1.T 00000010 68 69 73 20 69 73 20 6C 69 6E 65 20 32 0A 54 68 his is line 2.Th 00000020 69 73 20 69 73 20 6C 69 6E 65 20 33 0A 54 68 69 is is line 3.Thi 00000030 73 20 69 73 20 6C 69 6E 65 20 34 0A 54 68 69 73 s is line 4.This 00000040 20 69 73 20 6C 69 6E 65 20 35 0A 54 68 69 73 20 is line 5.This 00000050 69 73 20 6C 69 6E 65 20 36 0A 54 68 69 73 20 69 is line 6.This i 00000060 73 20 6C 69 6E 65 20 37 0A 54 68 69 73 20 69 73 s line 7.This is 00000070 20 6C 69 6E 65 20 38 0A 54 68 69 73 20 69 73 20 line 8.This is 00000080 6C 69 6E 65 20 39 0A 54 68 69 73 20 69 73 20 6C line 9.This is l 00000090 69 6E 65 20 31 30 0A 54 68 69 73 20 69 73 20 6C ine 10.This is l 000000a0 69 6E 65 20 31 31 0A 54 68 69 73 20 69 73 20 6C ine 11.This is l 000000b0 69 6E 65 20 31 32 0A 54 68 69 73 20 69 73 20 6C ine 12.This is l 000000c0 69 6E 65 20 31 33 0A 54 68 69 73 20 69 73 20 6C ine 13.This is l 000000d0 69 6E 65 20 31 34 0A 54 68 69 73 20 69 73 20 6C ine 14.This is l 000000e0 69 6E 65 20 31 35 0A 54 68 69 73 20 69 73 20 6C ine 15.This is l 000000f0 69 6E 65 20 31 36 0A 54 68 69 73 20 69 73 20 6C ine 16.This is l 87654321 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff 0123456789abcdef ======== == == == == == == == == == == == == == == == == ================ 00000100 69 6E 65 20 31 37 0A 54 68 69 73 20 69 73 20 6C ine 17.This is l
Make xxd display the byte offset at the top column?
1,426,771,187,000
I am using hexedit to show/edit a disk MBR (512 bytes, copied with dd). When I open the file, hexedit displays the file as 9 columns, 4 bytes per column (36 bytes per line). That is very unfortunate. I need to have it aligned in a meaningful way (i.e. 8 columns, 32 bytes per line). I could not find any way to do it in the manual page. Is there a trick I could use? UPDATE: here are the commands I use: dd if=/dev/sda of=sda.img bs=512 count=1 hexedit sda.img Regarding the output I get, it looks similar to slm's, only with 9 columns instead of 8.
Apparently it keys off of the width of your terminal. If you size the terminal just right you can get hexedit to show you 8 columns instead of 9. Example 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 000000A0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 000000C0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 000000E0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000100 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000120 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000140 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000160 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 00000180 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 
000001A0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................................ 000001C0 01 00 EE FE FF FF 01 00 00 00 AF 32 CF 1D 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...........2.................... 000001E0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA ..............................U. 00000200 00000220 00000240 00000260 00000280 I had the column width of the above terminal set to 151x55. $ resize COLUMNS=151; LINES=55; export COLUMNS LINES;
hexedit: change number of columns (bytes per line)
1,426,771,187,000
I'm trying to do the following: ch='\x21' line="\x21" len=50 for i in `seq 1 $len` do line+="$ch" done Instead of 50 '!' (hex code \x21) I get a list of 50 '\x21'. How can I do this in bash?
Per the man page, "Words of the form $'string' are treated specially". Thus, adding $'' to the mix may help: % bash bash-3.2$ ch=$'\x21'; echo $ch$ch$ch !!! bash-3.2$
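A loop isn't even needed to build such a line: printf reuses its format once per argument, so the no-op directive %.0s can repeat a character. Shown here with \041, the portable octal spelling of \x21 (the \xHH form works in bash's builtin printf but isn't required by POSIX):

```shell
# "%.0s" consumes an argument without printing it, so the leading
# character in the format repeats once per argument.
line=$(printf '\041%.0s' $(seq 1 50))
printf '%s\n' "$line"    # 50 "!" characters
echo "${#line}"          # -> 50
```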
Bash script: hex
1,426,771,187,000
I am creating a password generator, however I cannot get the passwords to append properly. Here is my script i="0" while [ $i -lt 5 ] do echo -n '#' >> passwords.txt && openssl rand -hex 4 >> passwords.txt && echo -n '/' >> passwords.txt && echo -n 'X' >> passwords.txt i=$[$i+1] done Output #b887e0d0 /X#7093289e /X#2210cfcd /X#fd175e1f /X#0c18fc9e /X Expected Output #b887e0d0/Z #7093289e/Z #2210cfcd/Z #fd175e1f/Z #0c18fc9e/Z How can I make it so that it doesn't skip that first line, and also so that it runs each command in that order and writes to the passwords txt file as in the expected output? Thanks.
This should do: for i in {1..5}; do printf '#%s/Z\n' "$(openssl rand -hex 4)" done >passwords.txt I replaced the multiple calls to echo with a single call to printf. Having the call to openssl wrapped inside a command substitution has the side effect of making the line ending disappear, and that newline character was the cause of the badly-placed line breaks visible in your example.
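If openssl itself is the obstacle (e.g. not installed), the four random bytes can come straight from /dev/urandom and be hex-encoded with od — a sketch of the same idea, not a security recommendation:

```shell
# 4 random bytes -> 8 lowercase hex digits per line
for i in 1 2 3 4 5; do
    printf '#%s/Z\n' "$(od -An -N4 -tx1 /dev/urandom | tr -d ' \n')"
done > passwords.txt
cat passwords.txt    # five lines like "#b887e0d0/Z"
```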
Appending Echo [Password Generator]
1,426,771,187,000
Assume the data consists of byte offsets which are not fixed, i.e. the distance between two subsequent file headers varies. The point of this thread is to go through each size of events separately in arrays. Example data fafafafa 00005e58 da1e5afe 00000000 * fafafafa 00005e58 da1e5afe 00000000 * 00000001 ffffffff 555eea72 00000000 * 00000004 fafafafa 01da1300 * 00000004 02991c00 fafafafa 01da1300 fafafafa 01da1300 fafafafa 01da1300 where the field delimiter is fafafafa. My proposal #!/bin/bash # http://stackoverflow.com/a/10383546/54964 # http://unix.stackexchange.com/a/209789/16920 myarr = ($( cat 25.6.2015_test.txt | awk -F 'fafafafa' '$1~/^[a-z0-9*]+$/ {print $1}') ) # http://stackoverflow.com/a/15105237/54964 # Now access elements of an array (change "1" to whatever you want) echo ${myarr[1]} # Or loop through every element in the array for i in "${myarr[@]}" do : echo $i done Script run as a whole Output awk2array.sh: line 5: syntax error near unexpected token `(' awk2array.sh: line 5: `myarr = ($( cat 25.6.2015_test.txt | awk -F 'fafafafa' '$1~/^[a-z0-9*]+$/ {print $1}') ) ' which I do not understand, since the brackets are balanced. I would like to get the output into an array, or store each event into a file named arithmetically (0.txt, 1.txt, ..., n.txt). I now describe some of the commands separately, and some parts of the code about which I am uncertain.
AWK command run separately The AWK command when run separately omits the field delimiter, giving 00005e58 da1e5afe 00000000 * 00005e58 da1e5afe 00000000 * 00000001 ffffffff 555eea72 00000000 * 00000004 01da1300 * 00000004 02991c00 01da1300 01da1300 01da1300 Wanted output is to have all data in an array where the field separator is fafafafa, such that fafafafa should be included in the cell, for instance Value of first cell in array ---------------------------- fafafafa 00005e58 da1e5afe 00000000 * Value of second cell -------------------- fafafafa 00005e58 da1e5afe 00000000 * 00000001 ffffffff 555eea72 00000000 * 00000004 3rd cell -------- 01da1300 * 00000004 02991c00 4th cell -------- fafafafa 01da1300 5th cell -------- fafafafa 01da1300 6th cell -------- fafafafa 01da1300 How can you store big data into N arrays with AWK? You could also store each event into a file after reading it, without starting to read the file again, continuing from the point where it left off.
Problem So many things wrong here #!/bin/bash myarr = ( has got a space between it meaning nothing is assigned if it even runs at all. cat 25.6.2015_test.txt | awk Awk can open its own files, no need for cat -F 'fafafafa' '$1~/^[a-z0-9*]+$/ -F is the field separator, not the record separator, so all this is doing is removing the text fafafafa; it's still reading each line as a record, so your next condition is entirely pointless. myarr = ($( cat 25.6.2015_test.txt | awk -F 'fafafafa' '$1~/^[a-z0-9*]+$/ {print $1}') ) This will print multiple lines which will all be separate elements in the array, as they are split on newlines and have no visibility of what is a record in awk (if you had actually split on records instead of fields). echo ${myarr[1]} echo $i Quote these unless you want to see all the files in your directory every time you echo (due to the * in the records) : Why ? Solution # Create an array myarr=() # Save the number of different blocks to be saved, notice the # `-vRS` which sets the record separator blocks=$(awk -vRS='fafafafa' '$1~/^[a-z0-9*]+$/{x++}END{print x}' file) # While the counter is less than the number of blocks. while [[ $x -le $blocks ]] ;do # Increase the counter ((x++)) # Add the value for that block to the array, notice the quotes around # `$()`, they are important in keeping the whole block as one array # element. The awk also increments its own counter for each # occurrence of 'fafafafa' and your condition for '$1'. When both # counters match the block is saved to the array. myarr+=("$(awk -vRS='fafafafa' -vN="$x" '$1~/^[a-z0-9*]+$/{x++} x==N{print RS$0}' test)") done
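If the goal is the numbered files (0.txt, 1.txt, ...) mentioned in the question, a single awk pass can write them directly while it reads, with no shell loop and no re-reading. This assumes an awk that supports a multi-character RS, such as GNU awk or mawk:

```shell
# demo input: two "fafafafa"-delimited blocks
printf 'fafafafa 00005e58 da1e5afe\nfafafafa 01da1300\n' > data.txt
awk -v RS='fafafafa' 'NR > 1 {
        fn = (NR - 2) ".txt"          # 0.txt, 1.txt, ...
        printf "%s%s", RS, $0 > fn    # re-attach the delimiter
    }' data.txt
head 0.txt 1.txt
```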
Put big data of heterogenous byte offset into arrays by AWK
1,426,771,187,000
So I am often guilty of running cat on an executable binary file, and my terminal usually makes some weird noises and isn't happy. Is there some accepted naming convention for giving an extension to binary/executable files? I have an executable file (the output of go build -o /tmp/api.exe .) and, as I just mentioned, I just named it .exe, but I am wondering if there is a way to check a file before I cat it, to see if it's UTF-8 or whatever.
The standard naming practice for executables is to give them the name of the command they’re supposed to implement: ls, cat... There is no provision for extensions which end up ignored from the command line. To check what a file contains before feeding it to cat, run file on it: $ file /bin/ls /bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=b6b1291d0cead046ed0fa5734037fa87a579adee, for GNU/Linux 3.2.0, stripped, too many notes (256) $ file /bin/zgrep /bin/zgrep: a /usr/bin/sh script, ASCII text executable This tells me that cat /bin/zgrep won’t do anything strange to my terminal (it doesn’t even contain escape sequences, which are identified separately by file). I much prefer using less in general: it will warn about binary files before showing them, and won’t mess up the terminal in any case. It can also be configured to behave like cat for short files (see the -F option). As mosvy points out, you can make cat safe to use on binaries by adding the -v option, which replaces non-printable characters with visible representations (^ and M- prefixes). (Rob Pike famously considered that this option is harmful — not because of its effects on the terminal, but because of its effect on command style.)
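That check is easy to fold into a wrapper; safecat below is a hypothetical helper that only dumps what file(1) classifies as text:

```shell
# Dump only files that file(1) classifies as text; describe the rest.
safecat() {
    for f do
        case $(file -b "$f") in
            *text*) cat -- "$f" ;;
            *)      printf '%s: skipped (%s)\n' "$f" "$(file -b "$f")" >&2 ;;
        esac
    done
}
printf 'hello\n' > t.txt
printf '\000\001\002' > t.bin
safecat t.txt    # -> hello
safecat t.bin    # reports to stderr, prints nothing on stdout
```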
Standard naming practice for executables (binary file) and how to tell whether a file has has non-printable characters?
1,426,771,187,000
I can find seemingly every variation of hex manipulation by printf except this one. I am trying to send html hex colour values to a text file, built mostly using printf. I can calculate the separate R, G and B values but they normally print in decimal (range 0-255). How can I print them out, a) in two-digit hexadecimal and b) concatenated as a six-digit string? For example variable values $R=254, $G=127, $B=0 should print as FE7F00.
You can use printf's %x (lowercase) or %X (uppercase) for this, forcing the width to 2 characters: #!/bin/sh r=254 g=127 b=0 printf '%02X%02X%02X\n' "$r" "$g" "$b" The result looks like so: FE7F00
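Going the other way — splitting a six-digit value back into decimal components — works with shell arithmetic, since hex constants like $((0xFE)) are POSIX:

```shell
hex=FE7F00
r=$((0x$(echo "$hex" | cut -c1-2)))
g=$((0x$(echo "$hex" | cut -c3-4)))
b=$((0x$(echo "$hex" | cut -c5-6)))
echo "$r $g $b"    # -> 254 127 0
```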
How to print variable value as a hex number?
1,426,771,187,000
I'm trying to find the offset of a hex pattern in a file. This works for one specific value: $ grep -obUaP -m1 "\x00\x50\x53\x46\x01\x01\x00\x00\x34\x01\x00\x00" file.bin 3088:PSF4 However, this pattern includes a few bytes that will change, so I need to include wildcards in my grep. I can't figure out how to do that. Here's everything I've tried so far: \x.., \x., .., and every similar form I can think of does not match \x[0-9][0-9] does not match \x.* does not match just .* (ie., \x00.*\x01) does match, but it's greedy and matches more than the pattern Probably overlooking something silly, but I'm running into a wall here. How do you specify a wildcard in hex, or at least when using grep with perl-regex to search for hex?
grep -P '\xAB' doesn't look for a hex character. There is no such thing as a hex character. \xAB is PCRE syntax to match a character whose codepoint value expressed in hexadecimal is 0xAB (171 in decimal). codepoint here would be the Unicode codepoint in locales that use UTF-8 and the byte value in locales that use a single-byte charset (GNU grep -P doesn't support multibyte charsets other than UTF-8). So \xAB would match on the U+00AB character («) in a UTF-8 locale (where that character is encoded on 2 bytes: 0xc2 and 0xab) and the 0xAB byte in single-byte locales (where it represents the Ћ character in a locale using the iso8859-5 charset, for instance). If you want to match on byte value, you should make sure the locale uses a single-byte charset; the C locale is probably your best bet. LC_ALL=C grep -P '\xAB' matches on the 0xAB (171) byte, regardless of what character (if any) it represents in any charset. To match on any single byte, again, you can use . (assuming the C locale or any locale with a single-byte-per-character charset). To match on a byte value within a range, as @Angle115 already said: [\x01-\x45] (here for byte values 1 to 0x45 / 69) But bear in mind that grep matches on the contents of text lines¹, so it will never find the newline character, which is the line delimiter and, regardless of the locale, always has value 0x0A² (10 in decimal). So LC_ALL=C grep -P '\x23.\xab' would match on a sequence of 3 bytes, the first one with value 0x23, the second with any value except 0xA and the third one with value 0xAB. To be able to search for bytes with arbitrary values including 0xA, you'd need to treat the input as a whole, not one line or NUL-delimited record at a time like grep does. For that, you could use pcregrep with its -M (multiline) option along with the (?s) flag (for newline not to be treated specially by .)
or use perl with its slurp mode: LC_ALL=C pcregrep --file-offsets -Ma '(?s)\x23.\xab' < file (pcregrep doesn't have a -b option; --file-offsets, which prints offset and length, is probably the closest). perl -l -0777 -ne 'print "$-[0]:$_" while /\x23.\xab/gs' < file Or: perl -l -0777 -ne 'print $-[0] if /\x23.\xab/s' < file To only print the byte offset of the first match. perl loads the whole file in memory; pcregrep doesn't, but has internal limits that would likely prevent you from processing files where 0xA bytes are far apart. ¹ or NUL-delimited records with --null/-z ² on ASCII-based systems. I don't even know if libpcre was ever ported to EBCDIC systems; I doubt many people will ever come across some of those these days.
How to grep for hex pattern w/ wildcards?
1,426,771,187,000
Debian jess 64 I was wondering if it is possible to view the binary of a file in the 00101000 form and edit it, I am able to view it but in the hex form and I am looking to view and edit it in a the 8 digit form, I have been able to view it in the correct form just not edit it so I believe it is possible, So moral of the story trying to view the binary in the 8 digit form and edit it rather then the hex.
with xxd, you can use the -b flag echo 'hello world' | xxd -b which will output 0000000: 01101000 01100101 01101100 01101100 01101111 00100000 hello 0000006: 01110111 01101111 01110010 01101100 01100100 00001010 world. you can redirect that to a file where you can edit it echo 'hello world' | xxd -b > dumped_bits.txt and then, leaving the columns in place, you can convert back with this (albeit hacky) script #!/bin/bash # name this file something like `bits_to_binary.sh` # strip anything that's not a bit string like `0000000:` or `world` bits=`sed -ze 's/\w*[^ 01]\w*//g' -e 's/ //g' -e 's/\n//' $1` # and convert the bit representation to binary printf "obase=16;ibase=2;${bits}\n" | bc | xxd -r -p and with those steps in combination, you can echo 'hello world' | xxd -b > dumped_bits.txt # edit dumped_bits.txt ./bits_to_binary.sh dumped_bits.txt # hooray! the binary output from the edited bits!
Viewing Binary not Hex
1,426,771,187,000
Is it possible to Modify and Replace $1 (awk) or \1 (sed) Values from Decimal to Hexadecimal Globally in a String? It is possible that the string may contain any decimal value, which needs to be modified and replaced with its hexadecimal equivalent. awk example: echo "&#047;Test&#045;Test&#045;Test&#045;Test&#045;Test&#047;Test&#047;Test&#047;" | awk '{gsub("&#([0-9]+);", $1, $0); print}' sed example: echo "&#047;Test&#045;Test&#045;Test&#045;Test&#045;Test&#047;Test&#047;Test&#047;" | sed -E 's/&#([0-9]+);/$(printf "%X" \1)/g;' echo "&#047;Test&#045;Test&#045;Test&#045;Test&#045;Test&#047;Test&#047;Test&#047;" | sed -E 's/&#([0-9]+);/$(echo "obase=16; \1" | bc)/g;' I've attempted to subexec and pipe with printf "%X" and bc, but have been unable to combine the two for the resulting decimal to hexadecimal modification and replacement. expected output: %2FTest%2DTest%2DTest%2DTest%2DTest%2FTest%2FTest%2F Your assistance is greatly appreciated.
With GNU awk, where the Record Separator can be a regexp, and what it matches is stored in RT: gawk -v RS='&#[0-9]+;' -v ORS= '1;RT{printf("%%%02X", substr(RT,3))}' Personally, I'd use perl instead: perl -pe 's{&#(\d+);}{sprintf "%%%02X", $1}ge' See also: perl -MURI::Escape -MHTML::Entities -lpe '$_ = uri_escape decode_entities $_' Which here gives: %2FTest-Test-Test-Test-Test%2FTest%2FTest%2F As the hyphen doesn't need to be encoded in a URI. It would also take care of converting % to %25, space to %20, &amp; to %26 and much more. There's also the question of what to do with non-ASCII characters (characters above &#127;)? If they should be converted to the URI encoding of their UTF-8 encoding, for instance for &#8364; (€, U+20AC, &euro;) to be converted to %E2%82%AC (the 3 bytes of the UTF-8 encoding of that character), then that should rather be: perl -MURI::Escape -MHTML::Entities -lpe '$_ = uri_escape_utf8 decode_entities $_' With uri_escape, you'd get the ISO8859-1 (aka latin1) encoding which in this day and age is unlikely to be what you want (and be limited to characters up to &#255;). The other solutions would convert &#8364; to %20AC for instance which is definitely wrong.
Modify and Replace $1 (awk) or \1 (sed) Values from Decimal to Hexadecimal Globally in a String?
1,426,771,187,000
I'm trying to match exit codes of a process that is documented to return hexadecimal exit codes (e.g. 0x00 for success, 0x40 - 0x4F on user error, 0x50 - 0x5F on internal error, etc.). I'd like to handle the exit code via a case statement, but the "obvious" solution doesn't match: $ val=10 $ case $val in > 0xA) echo match;; > *) echo no match;; > esac no match Is there a readable way to match hexadecimal values in a case statement?
Yes, the double-parentheses arithmetic operator will display hex values as decimal, letting case match them. $ echo $((0xA)) 10 $ case $val in > $((0xA))) echo match;; > *) echo no match;; > esac match
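The same trick extends to the documented ranges. A sketch (classify is a hypothetical helper) that buckets a decimal exit status using hex bounds:

```shell
classify() {
    rc=$1
    if   [ "$rc" -eq "$((0x00))" ]; then echo success
    elif [ "$rc" -ge "$((0x40))" ] && [ "$rc" -le "$((0x4F))" ]; then echo 'user error'
    elif [ "$rc" -ge "$((0x50))" ] && [ "$rc" -le "$((0x5F))" ]; then echo 'internal error'
    else echo "other ($rc)"
    fi
}
classify 0     # -> success
classify 69    # -> user error      (0x45)
classify 85    # -> internal error  (0x55)
```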
Match hexadecimal values in a case statement
1,426,771,187,000
In bash I can call PHP and run the following: testKey='8798(*&98}9%"^8&]8_98{9798**76876' testHex=$(php -r "echo bin2hex('$testKey');") echo $testHex And that will result in 38373938282a2639387d3925225e38265d385f39387b393739382a2a3736383736 I've got a system where PHP isn't available; is there any way to get the same result using just bash? Thanks
If you have hexdump lying around: $ printf "%s" "$testKey" | hexdump -ve '/1 "%x"' 38373938282a2639387d3925225e38265d385f39387b393739382a2a3736383736 -e sets a format string for hexdump, which 'must be surrounded by double quote ( " ) marks'. /1 uses one byte at a time for the format string %x, which prints it in hex (otherwise the byte order could result in different output). -v is to make it print repeated data as well (the default is to replace them with a *).
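od gives the same result on systems where even hexdump is missing, and its options used here are POSIX:

```shell
testKey='8798(*&98}9%"^8&]8_98{9798**76876'
printf '%s' "$testKey" | od -An -v -tx1 | tr -d ' \n'; echo
# -> 38373938282a2639387d3925225e38265d385f39387b393739382a2a3736383736
```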
BASH binary to Hex to match PHP bin2hex function?
1,513,762,571,000
When inserting a USB stick or device to computer, there is always the risk that the device is malicious, will act as an HID and potentially do some damage on the computer. How can I prevent this problem? Is disabling HID on specific USB port sufficient? How do I do that?
Install USBGuard — it provides a framework for authorising USB devices before activating them. With the help of a tool such as USBGuard Notifier or the USBGuard Qt applet, it can pop up a notification when you connect a new device, asking you what to do; and it can store permanent rules for known devices so you don’t have to confirm over and over. Rules are defined using a comprehensive language with support for any USB attribute (including serial number, insertion port...), so you can write rules that are as specific as you want — whitelist this keyboard if it has this identifier, this serial number, is connected to this port, etc.
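As an illustration of the rule language only (the device ID, port and interface values here are made up, not from any real device), a rules.conf fragment that pins one known keyboard to one port and blocks every other HID interface might look like:

```
allow id 046d:c31c via-port "1-1" with-interface { 03:01:01 03:00:00 }
block with-interface 03:*:*
```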
How to safely insert USB stick/device to Linux computer?
1,513,762,571,000
I need a software click debouncer solution for RHEL/CentOS. I'm getting intermittent, but frequent, double-clicks registered on single mouse clicks. The issue doesn't happen on Windows 10 as it seems Logitech (or Microsoft) compensate at the software level. Similar issues can be solved in Windows with a simple script using AutoHotKey like: LButton:: If (A_TimeSincePriorHotkey < 150) ;hyperclick Return Click Down KeyWait, LButton Click Up Return Or with Buggy-Mouse.ahk, but I haven't been able to find a maintained solution for RHEL/CentOS. There was a Linux port of AutoHotKey named IronAHK but it's last update on github was six years ago. There's an answer to a similar question at Avoid very fast double clicks but the provided solution is for Arch Linux.
This should be fixed with libinput 1.9. Announce: Pointer devices now have button debouncing automagically enabled. Ghost button release/press events due to worn out or bad-quality switches are transparently discarded and the device should just work.
Single clicks register as double-click - software click debounce in CentOS 7
1,513,762,571,000
I bought a new keyboard similar to an old one. The old one works, the new one not. The new keyboard has an unusual HID Descriptor and sends one extra data byte. Is there a Linux driver which support such keyboard (descriptor)? Communication log via cat /sys/kernel/debug/usb/usbmon/3u, byte sequences only (press A, release A; press Shift, press A, release A, release Shift sequence): Non-working 9-byte keyboard communication: 00000400 00000000 71 00000000 00000000 71 02000000 00000000 71 02000400 00000000 71 02000000 00000000 71 00000000 00000000 71 Working old keyboard communication: 00000400 00000000 00000000 00000000 02000000 00000000 02000400 00000000 02000000 00000000 00000000 00000000 Bonus: What is the extra byte? (I always saw 0x71 there.) HID Descriptors of the new keyboard via cat /sys/kernel/debug/hid/<interface>/rdesc: Interface 0003:17EF:609B.0089: 05 01 09 06 a1 01 05 07 19 e0 29 e7 15 00 25 01 75 01 95 08 81 02 75 08 95 01 81 01 05 08 19 01 29 03 75 01 95 03 91 02 95 01 75 05 91 01 15 00 26 ff 00 19 00 2a ff 00 05 07 75 08 95 06 81 00 05 01 0a 68 01 15 80 25 7f 95 01 75 08 81 02 c0 INPUT[INPUT] Field(0) Application(GenericDesktop.Keyboard) Usage(8) Keyboard.00e0 Keyboard.00e1 Keyboard.00e2 Keyboard.00e3 Keyboard.00e4 Keyboard.00e5 Keyboard.00e6 Keyboard.00e7 Logical Minimum(0) Logical Maximum(1) Report Size(1) Report Count(8) Report Offset(0) Flags( Variable Absolute ) Field(1) Application(GenericDesktop.Keyboard) Usage(256) LED.0000 LED.NumLock LED.CapsLock LED.ScrollLock LED.Compose LED.Kana LED.0006 LED.0007 LED.0008 LED.0009 ... LED sequence left out LED.0049 LED.004a LED.GenericIndicator LED.004c LED.004d LED.004e LED.004f ... 
LED sequence left out LED.00fd LED.00fe LED.00ff Logical Minimum(0) Logical Maximum(255) Report Size(8) Report Count(6) Report Offset(16) Flags( Array Absolute ) Field(2) Application(GenericDesktop.Keyboard) Usage(1) GenericDesktop.0168 Logical Minimum(-128) Logical Maximum(127) Report Size(8) Report Count(1) Report Offset(64) Flags( Variable Absolute ) OUTPUT[OUTPUT] Field(0) Application(GenericDesktop.Keyboard) Usage(3) LED.NumLock LED.CapsLock LED.ScrollLock Logical Minimum(0) Logical Maximum(1) Report Size(1) Report Count(3) Report Offset(0) Flags( Variable Absolute ) Keyboard.00e0 ---> Key.LeftControl Keyboard.00e1 ---> Key.LeftShift Keyboard.00e2 ---> Key.LeftAlt Keyboard.00e3 ---> Key.LeftMeta Keyboard.00e4 ---> Key.RightCtrl Keyboard.00e5 ---> Key.RightShift Keyboard.00e6 ---> Key.RightAlt Keyboard.00e7 ---> Key.RightMeta LED.0000 ---> Sync.Report LED.NumLock ---> LED.NumLock LED.CapsLock ---> LED.CapsLock LED.ScrollLock ---> LED.ScrollLock LED.Compose ---> LED.Compose LED.Kana ---> LED.Kana LED.0006 ---> Sync.Report LED.0007 ---> Sync.Report LED.0008 ---> Sync.Report LED.0009 ---> LED.Mute LED.000a ---> Sync.Report LED.000b ---> Sync.Report ... Sync.Report lines left out LED.0017 ---> Sync.Report LED.0018 ---> Sync.Report LED.0019 ---> LED.? LED.001a ---> Sync.Report LED.001b ---> Sync.Report ... Sync.Report lines left out LED.0025 ---> Sync.Report LED.0026 ---> Sync.Report LED.0027 ---> LED.Sleep LED.0028 ---> Sync.Report LED.0029 ---> Sync.Report ... Sync.Report lines left out LED.0049 ---> Sync.Report LED.004a ---> Sync.Report LED.GenericIndicator ---> LED.Misc LED.004c ---> LED.Suspend LED.004d ---> LED.? LED.004e ---> Sync.Report LED.004f ---> Sync.Report ... Sync.Report lines left out LED.00fe ---> Sync.Report LED.00ff ---> Sync.Report GenericDesktop.0168 ---> Absolute.Misc LED.NumLock ---> LED.? LED.CapsLock ---> LED.? LED.ScrollLock ---> LED.? 
Interface 0003:17EF:609B.008B: 05 0c 09 01 a1 01 85 01 19 00 2a 3c 02 15 00 26 3c 02 95 01 75 10 81 00 05 01 0a 68 01 15 80 25 7f 95 01 75 08 81 02 c0 05 01 09 80 a1 01 85 02 19 81 29 83 15 00 25 01 75 01 95 03 81 02 95 05 81 01 05 01 0a 68 01 15 80 25 7f 95 01 75 08 81 02 c0 06 01 ff 09 01 a1 01 85 05 95 07 75 08 15 00 26 ff 00 09 20 b1 03 c0 INPUT(1)[INPUT] Field(0) Application(Consumer.0001) Usage(573) Consumer.0000 Consumer.0001 Consumer.0002 Consumer.0003 Consumer.0004 ... Consumer sequence left out Consumer.0235 Consumer.0236 Consumer.0237 Consumer.HorizontalWheel Consumer.0239 Consumer.023a Consumer.023b Consumer.023c Logical Minimum(0) Logical Maximum(572) Report Size(16) Report Count(1) Report Offset(0) Flags( Array Absolute ) Field(1) Application(Consumer.0001) Usage(1) GenericDesktop.0168 Logical Minimum(-128) Logical Maximum(127) Report Size(8) Report Count(1) Report Offset(16) Flags( Variable Absolute ) INPUT(2)[INPUT] Field(0) Application(GenericDesktop.SystemControl) Usage(3) GenericDesktop.SystemPowerDown GenericDesktop.SystemSleep GenericDesktop.SystemWakeUp Logical Minimum(0) Logical Maximum(1) Report Size(1) Report Count(3) Report Offset(0) Flags( Variable Absolute ) Field(1) Application(GenericDesktop.SystemControl) Usage(1) GenericDesktop.0168 Logical Minimum(-128) Logical Maximum(127) Report Size(8) Report Count(1) Report Offset(8) Flags( Variable Absolute ) FEATURE(5)[FEATURE] Field(0) Application(ff01.0001) Usage(7) ff01.0020 ff01.0020 ff01.0020 ff01.0020 ff01.0020 ff01.0020 ff01.0020 Logical Minimum(0) Logical Maximum(255) Report Size(8) Report Count(7) Report Offset(0) Flags( Constant Variable Absolute ) Consumer.0000 ---> Sync.Report Consumer.0001 ---> Key.Unknown Consumer.0002 ---> Key.Unknown ... 
Key.Unknown lines left out Consumer.002e ---> Key.Unknown Consumer.002f ---> Key.Unknown Consumer.0030 ---> Key.Power Consumer.0031 ---> Key.Restart Consumer.0032 ---> Key.Sleep Consumer.0033 ---> Key.Unknown Consumer.0034 ---> Key.Sleep Consumer.0035 ---> Key.KbdIlluminationToggle Consumer.0036 ---> Key.Btn0 Consumer.0037 ---> Key.Unknown Consumer.0038 ---> Key.Unknown ... Key.Unknown lines left out Consumer.003e ---> Key.Unknown Consumer.003f ---> Key.Unknown Consumer.0040 ---> Key.Menu Consumer.0041 ---> Key.Select Consumer.0042 ---> Key.Up Consumer.0043 ---> Key.Down Consumer.0044 ---> Key.Left Consumer.0045 ---> Key.Right Consumer.0046 ---> Key.Esc Consumer.0047 ---> Key.KPPlus Consumer.0048 ---> Key.KPMinus Consumer.0049 ---> Key.Unknown Consumer.004a ---> Key.Unknown ... Key.Unknown lines left out Consumer.005e ---> Key.Unknown Consumer.005f ---> Key.Unknown Consumer.0060 ---> Key.Info Consumer.0061 ---> Key.Subtitle Consumer.0062 ---> Key.Unknown Consumer.0063 ---> Key.VCR Consumer.0064 ---> Key.Unknown Consumer.0065 ---> Key.Camera Consumer.0066 ---> Key.Unknown Consumer.0067 ---> Key.Unknown Consumer.0068 ---> Key.Unknown Consumer.0069 ---> Key.Red Consumer.006a ---> Key.Green Consumer.006b ---> Key.Blue Consumer.006c ---> Key.Yellow Consumer.006d ---> Key.Zoom Consumer.006e ---> Key.Unknown Consumer.006f ---> Key.BrightnessUp Consumer.0070 ---> Key.BrightnessDown Consumer.0071 ---> Key.Unknown Consumer.0072 ---> Key.? Consumer.0073 ---> Key.BrightnessMin Consumer.0074 ---> Key.BrightnessMax Consumer.0075 ---> Key.BrightnessAuto Consumer.0076 ---> Key.Unknown Consumer.0077 ---> Key.Unknown ... Key.Unknown lines left out Consumer.0080 ---> Key.Unknown Consumer.0081 ---> Key.Unknown Consumer.0082 ---> Key.? 
Consumer.0083 ---> Key.Last Consumer.0084 ---> Key.Enter Consumer.0085 ---> Key.Unknown Consumer.0086 ---> Key.Unknown Consumer.0087 ---> Key.Unknown Consumer.0088 ---> Key.PC Consumer.0089 ---> Key.TV Consumer.008a ---> Key.WWW Consumer.008b ---> Key.DVD Consumer.008c ---> Key.Phone Consumer.008d ---> Key.Program Consumer.008e ---> Key.? Consumer.008f ---> Key.? Consumer.0090 ---> Key.Memo Consumer.0091 ---> Key.CD Consumer.0092 ---> Key.VCR Consumer.0093 ---> Key.Tuner Consumer.0094 ---> Key.Exit Consumer.0095 ---> Key.Help Consumer.0096 ---> Key.Tape Consumer.0097 ---> Key.TV2 Consumer.0098 ---> Key.Sat Consumer.0099 ---> Key.Unknown Consumer.009a ---> Key.PVR Consumer.009b ---> Key.Unknown Consumer.009c ---> Key.ChannelUp Consumer.009d ---> Key.ChannelDown Consumer.009e ---> Key.Unknown Consumer.009f ---> Key.Unknown Consumer.00a0 ---> Key.VCR2 Consumer.00a1 ---> Key.Unknown Consumer.00a2 ---> Key.Unknown ... Key.Unknown lines left out Consumer.00ae ---> Key.Unknown Consumer.00af ---> Key.Unknown Consumer.00b0 ---> Key.Play Consumer.00b1 ---> Key.Pause Consumer.00b2 ---> Key.Record Consumer.00b3 ---> Key.FastForward Consumer.00b4 ---> Key.Rewind Consumer.00b5 ---> Key.NextSong Consumer.00b6 ---> Key.PreviousSong Consumer.00b7 ---> Key.StopCD Consumer.00b8 ---> Key.EjectCD Consumer.00b9 ---> Key.Shuffle Consumer.00ba ---> Key.Unknown Consumer.00bb ---> Key.Unknown Consumer.00bc ---> Key.? Consumer.00bd ---> Key.Unknown Consumer.00be ---> Key.Unknown Consumer.00bf ---> Key.Slow Consumer.00c0 ---> Key.Unknown Consumer.00c1 ---> Key.Unknown ... Key.Unknown lines left out Consumer.00cb ---> Key.Unknown Consumer.00cc ---> Key.Unknown Consumer.00cd ---> Key.PlayPause Consumer.00ce ---> Key.Unknown Consumer.00cf ---> Key.VoiceCommand Consumer.00d0 ---> Key.Unknown Consumer.00d1 ---> Key.Unknown ... 
Key.Unknown lines left out Consumer.00de ---> Key.Unknown Consumer.00df ---> Key.Unknown Consumer.00e0 ---> Absolute.Volume Consumer.00e1 ---> Key.Unknown Consumer.00e2 ---> Key.Mute Consumer.00e3 ---> Key.Unknown Consumer.00e4 ---> Key.Unknown Consumer.00e5 ---> Key.BassBoost Consumer.00e6 ---> Key.Unknown Consumer.00e7 ---> Key.Unknown Consumer.00e8 ---> Key.Unknown Consumer.00e9 ---> Key.VolumeUp Consumer.00ea ---> Key.VolumeDown Consumer.00eb ---> Key.Unknown Consumer.00ec ---> Key.Unknown ... Key.Unknown lines left out Consumer.00f3 ---> Key.Unknown Consumer.00f4 ---> Key.Unknown Consumer.00f5 ---> Key.Slow Consumer.00f6 ---> Key.Unknown Consumer.00f7 ---> Key.Unknown ... Key.Unknown lines left out Consumer.017f ---> Key.Unknown Consumer.0180 ---> Key.Unknown Consumer.0181 ---> Key.ButtonConfig Consumer.0182 ---> Key.Bookmarks Consumer.0183 ---> Key.Config Consumer.0184 ---> Key.? Consumer.0185 ---> Key.? Consumer.0186 ---> Key.? Consumer.0187 ---> Key.? Consumer.0188 ---> Key.? Consumer.0189 ---> Key.? Consumer.018a ---> Key.Mail Consumer.018b ---> Key.? Consumer.018c ---> Key.? Consumer.018d ---> Key.? 
Consumer.018e ---> Key.Calendar Consumer.018f ---> Key.TaskManager Consumer.0190 ---> Key.Journal Consumer.0191 ---> Key.Finance Consumer.0192 ---> Key.Calc Consumer.0193 ---> Key.Player Consumer.0194 ---> Key.File Consumer.0195 ---> Key.Unknown Consumer.0196 ---> Key.WWW Consumer.0197 ---> Key.Unknown Consumer.0198 ---> Key.Unknown Consumer.0199 ---> Key.Chat Consumer.019a ---> Key.Unknown Consumer.019b ---> Key.Unknown Consumer.019c ---> Key.Logoff Consumer.019d ---> Key.Unknown Consumer.019e ---> Key.Coffee Consumer.019f ---> Key.ControlPanel Consumer.01a0 ---> Key.Unknown Consumer.01a1 ---> Key.Unknown Consumer.01a2 ---> Key.AppSelect Consumer.01a3 ---> Key.Next Consumer.01a4 ---> Key.Previous Consumer.01a5 ---> Key.Unknown Consumer.01a6 ---> Key.Help Consumer.01a7 ---> Key.Documents Consumer.01a8 ---> Key.Unknown Consumer.01a9 ---> Key.Unknown Consumer.01aa ---> Key.Unknown Consumer.01ab ---> Key.SpellCheck Consumer.01ac ---> Key.Unknown Consumer.01ad ---> Key.Unknown Consumer.01ae ---> Key.Keyboard Consumer.01af ---> Key.Unknown Consumer.01b0 ---> Key.Unknown Consumer.01b1 ---> Key.ScreenSaver Consumer.01b2 ---> Key.Unknown Consumer.01b3 ---> Key.Unknown Consumer.01b4 ---> Key.File Consumer.01b5 ---> Key.Unknown Consumer.01b6 ---> Key.? Consumer.01b7 ---> Key.Audio Consumer.01b8 ---> Key.Video Consumer.01b9 ---> Key.Unknown Consumer.01ba ---> Key.Unknown Consumer.01bb ---> Key.Unknown Consumer.01bc ---> Key.? Consumer.01bd ---> Key.Info Consumer.01be ---> Key.Unknown Consumer.01bf ---> Key.Unknown ... Key.Unknown lines left out Consumer.01ff ---> Key.Unknown Consumer.0200 ---> Key.Unknown Consumer.0201 ---> Key.New Consumer.0202 ---> Key.Open Consumer.0203 ---> Key.Close Consumer.0204 ---> Key.Exit Consumer.0205 ---> Key.Unknown Consumer.0206 ---> Key.Unknown Consumer.0207 ---> Key.Save Consumer.0208 ---> Key.Print Consumer.0209 ---> Key.Props Consumer.020a ---> Key.Unknown Consumer.020b ---> Key.Unknown ... 
Key.Unknown lines left out Consumer.0218 ---> Key.Unknown Consumer.0219 ---> Key.Unknown Consumer.021a ---> Key.Undo Consumer.021b ---> Key.Copy Consumer.021c ---> Key.Cut Consumer.021d ---> Key.Paste Consumer.021e ---> Key.Unknown Consumer.021f ---> Key.Find Consumer.0220 ---> Key.Unknown Consumer.0221 ---> Key.Search Consumer.0222 ---> Key.Goto Consumer.0223 ---> Key.HomePage Consumer.0224 ---> Key.Back Consumer.0225 ---> Key.Forward Consumer.0226 ---> Key.Stop Consumer.0227 ---> Key.Refresh Consumer.0228 ---> Key.Unknown Consumer.0229 ---> Key.Unknown Consumer.022a ---> Key.Bookmarks Consumer.022b ---> Key.Unknown Consumer.022c ---> Key.Unknown Consumer.022d ---> Key.? Consumer.022e ---> Key.? Consumer.022f ---> Key.? Consumer.0230 ---> Key.Unknown Consumer.0231 ---> Key.Unknown Consumer.0232 ---> Key.Unknown Consumer.0233 ---> Key.ScrollUp Consumer.0234 ---> Key.ScrollDown Consumer.0235 ---> Key.Unknown Consumer.0236 ---> Key.Unknown Consumer.0237 ---> Key.Unknown Consumer.HorizontalWheel ---> Relative.HWheel Consumer.0239 ---> Key.Unknown Consumer.023a ---> Key.Unknown Consumer.023b ---> Key.Unknown Consumer.023c ---> Key.Unknown GenericDesktop.0168 ---> Absolute.Misc GenericDesktop.SystemPowerDown ---> Key.Power GenericDesktop.SystemSleep ---> Key.Sleep GenericDesktop.SystemWakeUp ---> Key.WakeUp GenericDesktop.0168 ---> Sync.Report HID Descriptor of the old keyboard, interface 0003:17EF:6022.0087 (only one keyboard interface): 05 01 09 06 a1 01 05 07 19 e0 29 e7 15 00 25 01 75 01 95 08 81 02 95 01 75 08 81 01 15 00 26 a4 00 19 00 2a a4 00 05 07 75 08 95 06 81 00 c0 INPUT[INPUT] Field(0) Application(GenericDesktop.Keyboard) Usage(8) Keyboard.00e0 Keyboard.00e1 Keyboard.00e2 Keyboard.00e3 Keyboard.00e4 Keyboard.00e5 Keyboard.00e6 Keyboard.00e7 Logical Minimum(0) Logical Maximum(1) Report Size(1) Report Count(8) Report Offset(0) Flags( Variable Absolute ) Field(1) Application(GenericDesktop.Keyboard) Usage(165) Keyboard.0000 Keyboard.0001 Keyboard.0002 
Keyboard.0003 Keyboard.0004 ... Keyboard sequence left out Keyboard.009f Keyboard.00a0 Keyboard.00a1 Keyboard.00a2 Keyboard.00a3 Keyboard.00a4 Logical Minimum(0) Logical Maximum(164) Report Size(8) Report Count(6) Report Offset(16) Flags( Array Absolute ) Keyboard.00e0 ---> Key.LeftControl Keyboard.00e1 ---> Key.LeftShift Keyboard.00e2 ---> Key.LeftAlt Keyboard.00e3 ---> Key.LeftMeta Keyboard.00e4 ---> Key.RightCtrl Keyboard.00e5 ---> Key.RightShift Keyboard.00e6 ---> Key.RightAlt Keyboard.00e7 ---> Key.RightMeta Keyboard.0000 ---> Sync.Report Keyboard.0001 ---> Sync.Report Keyboard.0002 ---> Sync.Report Keyboard.0003 ---> Sync.Report Keyboard.0004 ---> Key.A Keyboard.0005 ---> Key.B Keyboard.0006 ---> Key.C Keyboard.0007 ---> Key.D Keyboard.0008 ---> Key.E Keyboard.0009 ---> Key.F Keyboard.000a ---> Key.G Keyboard.000b ---> Key.H Keyboard.000c ---> Key.I Keyboard.000d ---> Key.J Keyboard.000e ---> Key.K Keyboard.000f ---> Key.L Keyboard.0010 ---> Key.M Keyboard.0011 ---> Key.N Keyboard.0012 ---> Key.O Keyboard.0013 ---> Key.P Keyboard.0014 ---> Key.Q Keyboard.0015 ---> Key.R Keyboard.0016 ---> Key.S Keyboard.0017 ---> Key.T Keyboard.0018 ---> Key.U Keyboard.0019 ---> Key.V Keyboard.001a ---> Key.W Keyboard.001b ---> Key.X Keyboard.001c ---> Key.Y Keyboard.001d ---> Key.Z Keyboard.001e ---> Key.1 Keyboard.001f ---> Key.2 Keyboard.0020 ---> Key.3 Keyboard.0021 ---> Key.4 Keyboard.0022 ---> Key.5 Keyboard.0023 ---> Key.6 Keyboard.0024 ---> Key.7 Keyboard.0025 ---> Key.8 Keyboard.0026 ---> Key.9 Keyboard.0027 ---> Key.0 Keyboard.0028 ---> Key.Enter Keyboard.0029 ---> Key.Esc Keyboard.002a ---> Key.Backspace Keyboard.002b ---> Key.Tab Keyboard.002c ---> Key.Space Keyboard.002d ---> Key.Minus Keyboard.002e ---> Key.Equal Keyboard.002f ---> Key.LeftBrace Keyboard.0030 ---> Key.RightBrace Keyboard.0031 ---> Key.BackSlash Keyboard.0032 ---> Key.BackSlash Keyboard.0033 ---> Key.Semicolon Keyboard.0034 ---> Key.Apostrophe Keyboard.0035 ---> Key.Grave Keyboard.0036 ---> 
Key.Comma Keyboard.0037 ---> Key.Dot Keyboard.0038 ---> Key.Slash Keyboard.0039 ---> Key.CapsLock Keyboard.003a ---> Key.F1 Keyboard.003b ---> Key.F2 Keyboard.003c ---> Key.F3 Keyboard.003d ---> Key.F4 Keyboard.003e ---> Key.F5 Keyboard.003f ---> Key.F6 Keyboard.0040 ---> Key.F7 Keyboard.0041 ---> Key.F8 Keyboard.0042 ---> Key.F9 Keyboard.0043 ---> Key.F10 Keyboard.0044 ---> Key.F11 Keyboard.0045 ---> Key.F12 Keyboard.0046 ---> Key.SysRq Keyboard.0047 ---> Key.ScrollLock Keyboard.0048 ---> Key.Pause Keyboard.0049 ---> Key.Insert Keyboard.004a ---> Key.Home Keyboard.004b ---> Key.PageUp Keyboard.004c ---> Key.Delete Keyboard.004d ---> Key.End Keyboard.004e ---> Key.PageDown Keyboard.004f ---> Key.Right Keyboard.0050 ---> Key.Left Keyboard.0051 ---> Key.Down Keyboard.0052 ---> Key.Up Keyboard.0053 ---> Key.NumLock Keyboard.0054 ---> Key.KPSlash Keyboard.0055 ---> Key.KPAsterisk Keyboard.0056 ---> Key.KPMinus Keyboard.0057 ---> Key.KPPlus Keyboard.0058 ---> Key.KPEnter Keyboard.0059 ---> Key.KP1 Keyboard.005a ---> Key.KP2 Keyboard.005b ---> Key.KP3 Keyboard.005c ---> Key.KP4 Keyboard.005d ---> Key.KP5 Keyboard.005e ---> Key.KP6 Keyboard.005f ---> Key.KP7 Keyboard.0060 ---> Key.KP8 Keyboard.0061 ---> Key.KP9 Keyboard.0062 ---> Key.KP0 Keyboard.0063 ---> Key.KPDot Keyboard.0064 ---> Key.102nd Keyboard.0065 ---> Key.Compose Keyboard.0066 ---> Key.Power Keyboard.0067 ---> Key.KPEqual Keyboard.0068 ---> Key.F13 Keyboard.0069 ---> Key.F14 Keyboard.006a ---> Key.F15 Keyboard.006b ---> Key.F16 Keyboard.006c ---> Key.F17 Keyboard.006d ---> Key.F18 Keyboard.006e ---> Key.F19 Keyboard.006f ---> Key.F20 Keyboard.0070 ---> Key.F21 Keyboard.0071 ---> Key.F22 Keyboard.0072 ---> Key.F23 Keyboard.0073 ---> Key.F24 Keyboard.0074 ---> Key.Open Keyboard.0075 ---> Key.Help Keyboard.0076 ---> Key.Props Keyboard.0077 ---> Key.Front Keyboard.0078 ---> Key.Stop Keyboard.0079 ---> Key.Again Keyboard.007a ---> Key.Undo Keyboard.007b ---> Key.Cut Keyboard.007c ---> Key.Copy Keyboard.007d ---> 
Key.Paste Keyboard.007e ---> Key.Find Keyboard.007f ---> Key.Mute Keyboard.0080 ---> Key.VolumeUp Keyboard.0081 ---> Key.VolumeDown Keyboard.0082 ---> Key.Unknown Keyboard.0083 ---> Key.Unknown Keyboard.0084 ---> Key.Unknown Keyboard.0085 ---> Key.KPComma Keyboard.0086 ---> Key.Unknown Keyboard.0087 ---> Key.RO Keyboard.0088 ---> Key.Katakana/Hiragana Keyboard.0089 ---> Key.Yen Keyboard.008a ---> Key.Henkan Keyboard.008b ---> Key.Muhenkan Keyboard.008c ---> Key.KPJpComma Keyboard.008d ---> Key.Unknown Keyboard.008e ---> Key.Unknown Keyboard.008f ---> Key.Unknown Keyboard.0090 ---> Key.Hangeul Keyboard.0091 ---> Key.Hanja Keyboard.0092 ---> Key.Katakana Keyboard.0093 ---> Key.HIRAGANA Keyboard.0094 ---> Key.Zenkaku/Hankaku Keyboard.0095 ---> Key.Unknown Keyboard.0096 ---> Key.Unknown ... Key.Unknown lines left out Keyboard.009a ---> Key.Unknown Keyboard.009b ---> Key.Unknown Keyboard.009c ---> Key.Delete Keyboard.009d ---> Key.Unknown Keyboard.009e ---> Key.Unknown ... Key.Unknown lines left out Keyboard.00a3 ---> Key.Unknown Keyboard.00a4 ---> Key.Unknown (Issue experienced on Lenovo Professional Wireless Keyboard, an ASUS keyboard... both wireless keyboards manufactured by Primax.) (Issue not experienced on Windows and GRUB.) (Yves Trudeau has already played with it and implemented a driver which ignores the extra byte, but I would rather use something from vanilla kernel and something less hacky.)
A proper fix was merged into the Linux kernel: https://lkml.org/lkml/2019/3/27/350

It will be available when version 5.2 comes out, and will probably be back-ported to some distributions.
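For reference, the 8-byte part of each report in the captures above is the standard HID boot-keyboard layout: one modifier bitmap byte, one reserved byte, then six keycode slots; the ninth byte is the vendor extra. A minimal decoding sketch (the MODS/KEYS tables are deliberately truncated to the two usages that appear in the log):

```python
# Standard HID boot keyboard report: 1 modifier byte, 1 reserved byte,
# 6 keycode slots. This keyboard appends one extra vendor byte (0x71).
MODS = {0x02: "LeftShift"}   # bit 1 of the modifier bitmap
KEYS = {0x04: "A"}           # usage 0x04 on the Keyboard/Keypad page

def decode(report: bytes):
    """Decode an 8- or 9-byte keyboard report into (modifiers, keys, extra)."""
    mods = [name for bit, name in MODS.items() if report[0] & bit]
    keys = [KEYS.get(k, hex(k)) for k in report[2:8] if k != 0]
    extra = report[8] if len(report) > 8 else None   # the mystery byte
    return mods, keys, extra

# "02000400 00000000 71" from the usbmon capture: Shift held, A pressed.
mods, keys, extra = decode(bytes.fromhex("020004000000000071"))
```

Running this over each captured report reproduces the press/release sequence described in the question, with the trailing 0x71 left over in every report.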
Linux HID driver for Primax wireless keyboards
1,513,762,571,000
I've been trying for a while but have not been able to find a way to control the lights on a set of controllers from the game Buzz (wired, from Playstation 2). You can see some of my failed attempts in my questions over on Stack Overflow:

Ruby libusb: Stall error
Sending HID defined messages with usblib

So I turned to a more base linux method of sending messages, and failed to do it by piping data to /dev/hidraw0, too. Then I discovered a file in the linux repository which refers to the buzz controllers specifically (/linux/drivers/hid/hid-sony.c), and the fact that they have a light. It even has a method called buzz_set_leds (line 1512):

static void buzz_set_leds(struct sony_sc *sc)

So I'm 100% sure that this is the code that does what I'm trying to do. I've had a go at including this in a C file, but am unable to include hid-sony because I seem to be missing these files:

#include <linux/device.h>
#include <linux/hid.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/leds.h>
#include <linux/power_supply.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/idr.h>
#include <linux/input/mt.h>
#include "hid-ids.h"

In compilation, I get this error:

hid-sony.c:29:26: fatal error: linux/device.h: No such file or directory
 #include <linux/device.h>
                          ^
compilation terminated.

Sorry, I'm a Ruby programmer with no background in C. How do I get these missing 'linux/' files and refer to them from my C library - or how can I write to the controllers from the shell?
With the sony driver loaded, the driver provides standard LED kernel interfaces:

echo 255 > /sys/class/leds/*buzz1/brightness
echo 0 > /sys/class/leds/*buzz1/brightness
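The same sysfs interface can be driven from a script instead of the shell. A sketch in Python (the glob pattern is the one from the echo commands above; the actual LED path varies per machine, and this has not been tested against real Buzz hardware):

```python
import time
from glob import glob
from pathlib import Path

def set_brightness(led_path: Path, value: int) -> None:
    """Write a brightness value to a LED's sysfs node (0 = off, 255 = full)."""
    led_path.write_text(f"{value}\n")

def blink(led_path: Path, times: int = 3, period: float = 0.5) -> None:
    """Blink a LED by toggling its brightness node."""
    for _ in range(times):
        set_brightness(led_path, 255)
        time.sleep(period)
        set_brightness(led_path, 0)
        time.sleep(period)

if __name__ == "__main__":
    # e.g. /sys/class/leds/<device>buzz1/brightness, as in the echo example
    for node in glob("/sys/class/leds/*buzz1/brightness"):
        blink(Path(node))
```

Needs the same permissions as the echo commands (typically root, or a udev rule granting write access to the brightness node).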
How can I write to the Buzz controllers HID device created by hid-sony.c to work the LEDs?
1,513,762,571,000
OS: Ubuntu 18.04.3
Kernel: 5.3.8

Hi guys :) I'm trying to create a bunch of HID gadgets by using configfs. It was successful until setting up the fourth gadget, but the kernel emits an error message during creation of the fifth gadget. The error message was as below.

# 4 successive gadget creations
g_mouse1 : /dev/hidg0
g_mouse2 : /dev/hidg1
g_mouse3 : /dev/hidg2
g_kbd1 : /dev/hidg3

# error occurred
mkdir: cannot create directory ‘/config/usb_gadget/g_kbd2/functions/hid.usb0’: No such device

It seems like the HID function cannot be created anymore. So my questions are "Is the number of gadgets limited?" and "If a user can adjust the limit, how can it be done?"

According to further research, I found out that the mass_storage function can be created up to 5 times, and the midi function more than 10 times. So a specific limit exists for each USB class. However, my project requires going beyond the HID class' limit. Does anyone know a way to manipulate those limits?

Thanks to @mosvy! The problem was solved this way:

1. Change the value of HIDG_MINORS in /usr/src/linux-$(uname -r)/drivers/usb/gadget/function/f_hid.c.
2. Recompile the kernel modules in /usr/src/linux-$(uname -r)/drivers/usb/gadget. The kernel modules which need to be updated are: udc_core, libcomposite, usb_f_hid.

Now you can create HID gadgets up to HIDG_MINORS.
Yes, you can only create 4 HID gadgets, and it's a hard-coded limit: the only way to bypass it is by modifying the code and recompiling the usb_f_hid.ko module.

This limitation has to do with how Linux allocates dynamic major/minor numbers for the /dev/hidg# devices. From drivers/usb/gadget/function/f_hid.c:

#define HIDG_MINORS 4

static inline int hidg_get_minor(void)
{
	...
	if (ret >= HIDG_MINORS) {
		ida_simple_remove(&hidg_ida, ret);
		ret = -ENODEV;

static struct usb_function_instance *hidg_alloc_inst(void)
{
	...
	status = ghid_setup(NULL, HIDG_MINORS);

int ghid_setup(struct usb_gadget *g, int count)
{
	...
	status = alloc_chrdev_region(&dev, 0, count, "hidg");

Similar limitations exist for other gadgets which create device nodes (/dev/g_printer# = printer, /dev/ttyGS# = gser + obex + acm, etc).
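The failure mode can be modeled in a few lines: minor numbers are handed out from an IDA, and any allocation at or beyond HIDG_MINORS is rolled back with -ENODEV, which is what mkdir surfaces as "No such device". A simplified Python model of that logic (a toy illustration, not kernel code):

```python
import errno

HIDG_MINORS = 4  # hard-coded limit in drivers/usb/gadget/function/f_hid.c

class MinorAllocator:
    """Toy model of ida_simple_get() bounded by HIDG_MINORS."""
    def __init__(self, limit: int = HIDG_MINORS):
        self.limit = limit
        self.used = set()

    def get(self) -> int:
        # lowest free minor number, like the kernel's IDA
        minor = next(i for i in range(len(self.used) + 1) if i not in self.used)
        if minor >= self.limit:      # mirrors the check in hidg_get_minor()
            return -errno.ENODEV     # allocation refused; /dev/hidgN not created
        self.used.add(minor)
        return minor

alloc = MinorAllocator()
results = [alloc.get() for _ in range(5)]   # five gadget creations
```

The first four calls yield minors 0 through 3 (hidg0..hidg3, exactly the four gadgets that worked), and the fifth returns -ENODEV.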
Is there a limit to the number of USB gadget can be created with configfs?
1,513,762,571,000
I have read that some USB devices emulate a keyboard and the information these devices send will be as if the information was typed on a keyboard. For example: a magnetic card reader can use an emulated keyboard to give information about the card. This is a question I had asked about keyboard, BT keyboard and stdin which explains how they work. So where does an application have to listen to the input generated by an emulated keyboard?
If you hook up two USB keyboards to your system, or a USB keyboard to a laptop with a built-in keyboard, you can alternately type characters¹ on each one (or use the left half of one keyboard and the right half of the other). The emulating devices have nothing more to do than tell the system they are a keyboard, just like a real keyboard would, and characters coming from the device will be inserted in the right queue. The application just listens like it would for normal keyboard input.

There are other ways to get the same result. I used to have a barcode scanner from before the USB era that had to be physically inserted between the keyboard and the mainboard (using female and male PS/2 connectors); one scan would act as if you had pressed the number sequence of the barcode in quick succession.

¹ Special keys like Fn modify the keycode of other keys sent by the keyboard, so you cannot press Fn on one keyboard and expect a key on the other keyboard to be modified.
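Concretely, on Linux the place an application listens is the evdev layer: every keyboard, emulated or not, shows up as a /dev/input/eventX node delivering fixed-size struct input_event records. A sketch of decoding one record (64-bit Linux struct layout assumed; EV_KEY and KEY_A values are from linux/input-event-codes.h):

```python
import struct

# struct input_event on 64-bit Linux: two longs (timestamp seconds and
# microseconds), then type and code (u16 each) and value (s32).
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
EV_KEY = 0x01   # key press/release events
KEY_A = 30      # from linux/input-event-codes.h

def parse_event(buf: bytes) -> dict:
    """Unpack one input_event record; value 1 = press, 0 = release."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return {"type": etype, "code": code, "value": value}

# An application would read records from the device node, e.g.:
#   with open("/dev/input/event3", "rb") as f:      # hypothetical node
#       ev = parse_event(f.read(EVENT_SIZE))
```

Whether the keystroke came from a real keyboard or a card reader emulating one, the records look identical at this level.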
How does an emulated keyboard work?
1,513,762,571,000
I have a Lenovo Duet 3 Bluetooth keyboard, which works fine when connected physically (it has 5 pins for that) to its laptop, and also works as expected when I connect it to my Android phone. However, I cannot get it to work under (Arch) Linux.

Kernel and bluetooth stack (bluez-libs etc.) are up to date, so I connect the device using bluetoothctl (output abbreviated for clarity):

[bluetooth]# power on
Changing power on succeeded
[bluetooth]# scan on
Discovery started
[NEW] Device D6:45:02:72:41:4F Duet 3 BT
[bluetooth]# pair D6:45:02:72:41:4F
Attempting to pair with D6:45:02:72:41:4F
[CHG] Device D6:45:02:72:41:4F Connected: yes
[CHG] Device D6:45:02:72:41:4F ServicesResolved: yes
[CHG] Device D6:45:02:72:41:4F Paired: yes
[NEW] Primary Service (Handle 0x0000) /org/bluez/hci0/dev_D6_45_02_72_41_4F/service000a 00001801-0000-1000-8000-00805f9b34fb Generic Attribute Profile
... {more new services follow, e.g. for Dev. Information, Battery etc.}
Pairing successful
[Duet 3 BT]# trust D6:45:02:72:41:4F
Changing D6:45:02:72:41:4F trust succeeded
[Duet 3 BT]# connect D6:45:02:72:41:4F
Attempting to connect to D6:45:02:72:41:4F
Connection successful
[Lenovo Duet 3 BT Folio]#

The device stays connected, and I can see battery information. So far, so good, but typing anything or using the trackpad does absolutely nothing, so it's pretty useless as an input device.
Try to turn on Caps Lock before you detach the keyboard.
Bluetooth keyboard connects, but does not work
1,513,762,571,000
I have a USB barcode scanner and am running a python script that collects data from /dev/hidraw0 and inputs the data into a database. The issue is that every time the scanner collects a code, it additionally sends it to the terminal and actually tries to log on to the system via the tty. Is there a way to stop the HID from reaching the terminal and trying to log on, while still allowing the python script to collect the data? Thank you in advance for any help that you can provide.
Open /dev/input/path-to-your-scanner with the grab option. Use the path with symlinks that are constant across boots, not /dev/input/eventX. See e.g. the python-evdev library, which makes it easy to do from Python.

You cannot grab on the hidraw level, and unless you need the HID reports themselves for some reason, this is not necessary. If you do need the hidraw level, then it will get tricky - you'll have to disengage the hidraw level from feeding into the input level. Or maybe open both the input device and the hidraw device; I've never tried that.
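The grab can also be done with nothing but the standard library, via the EVIOCGRAB ioctl that evdev libraries wrap. A sketch (the device path is a placeholder; the ioctl number is recomputed from the kernel's _IOW('E', 0x90, int) macro rather than hard-coded):

```python
import fcntl
import os
import struct

def _IOW(type_chr: str, nr: int, size: int) -> int:
    """Recompute the kernel's _IOW() ioctl-number macro."""
    IOC_WRITE = 1
    return (IOC_WRITE << 30) | (size << 16) | (ord(type_chr) << 8) | nr

EVIOCGRAB = _IOW("E", 0x90, struct.calcsize("i"))  # from linux/input.h

def grab(path: str) -> int:
    """Open an evdev node and take an exclusive grab on it.

    While grabbed, events go only to this fd, so the scanner's
    keystrokes no longer reach the tty / login prompt.
    """
    fd = os.open(path, os.O_RDONLY)
    fcntl.ioctl(fd, EVIOCGRAB, 1)   # 1 = grab, 0 = release
    return fd

# e.g. grab("/dev/input/by-id/usb-SCANNER-event-kbd")  # hypothetical path
```

The /dev/input/by-id/ symlinks are the boot-stable paths mentioned above.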
How to direct /dev/hidraw output to python application and not terminal
1,513,762,571,000
I'm looking for a way to replace my keyboard kernel module with a custom one. I have a Logitech MK710 keyboard + mouse set for this purpose, with a USB receiver exposing those 2 interfaces. By default, this USB receiver is managed by the usb, usbhid or logitech-hidpp-device modules; here is some information (note: 1-2 is the receiver device):

ubuntu@ubuntu-VirtualBox:/sys/bus/usb/devices/1-2$ tree | grep driver
│   ├── driver -> ../../../../../../bus/usb/drivers/usbhid
│   ├── driver -> ../../../../../../bus/usb/drivers/usbhid
│   │   │   ├── driver -> ../../../../../../../../bus/hid/drivers/logitech-hidpp-device
│   │   │   ├── driver -> ../../../../../../../../bus/hid/drivers/logitech-hidpp-device
│   │   ├── driver -> ../../../../../../../bus/hid/drivers/logitech-djreceiver
│   ├── driver -> ../../../../../../bus/usb/drivers/usbhid
├── driver -> ../../../../../bus/usb/drivers/usb

What I want to achieve is to write a proper module which would be chosen by the kernel instead of those default drivers. I think it's a matter of writing a proper module alias, but I'm not sure, because nothing has worked yet.
Things I already tried:

1. Put my module inside the /lib/modules/$(uname -r)/kernel/drivers directory (I created my own custom subdir inside and put the .ko file there).

2. Use a proper alias in the module C code. I tried all the options listed below (note: USB_VENDOR_ID and USB_PRODUCT_ID are macros used by me and their values are set properly for my specific device):

static struct hid_device_id mod_table [] = {
	{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY, USB_VENDOR_ID, USB_PRODUCT_ID) },
	{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(hid, mod_table);

or

static struct hid_device_id mod_table [] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID, USB_PRODUCT_ID) },
	{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(hid, mod_table);

and

static struct usb_device_id mod_table [] = {
	{ USB_DEVICE(USB_VENDOR_ID, USB_PRODUCT_ID) },
	{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, mod_table);

3. Remove the original (default) HID drivers from the /lib/modules/$(uname -r)/kernel/drivers directory (the 3 I specified at the top).

Yet the kernel still chooses to load the original modules instead of my own. I even made sure that only my driver's alias specifies the vendor and product IDs (checking it in the modules.alias file), but nothing works. The module starts to work only when I detach the kernel drivers manually from user space with the libusb library (using the libusb_detach_kernel_driver function) and reload my own custom module - only then does the kernel associate the device with my driver, and only until the next boot. I'd like to make it permanent, or even automatic.

I hope the whole concept is understandable and is not too big of a mess. Thanks in advance.
Most likely you are being tripped up by initramfs: a copy of the original HID driver module was stored in there when your current kernel was installed, and if you haven't regenerated the initramfs when adding your module, your customized one won't be in there.

At boot time, the USB support modules are among the first to be loaded, when the system is still running on initramfs and the real root filesystem has not been mounted yet. So the system is still finding & loading the original usbhid + logitech-hidpp-device module combination.

You seem to be using Ubuntu, so the Debian-style sudo update-initramfs -u command should be enough to rebuild the initramfs of the current kernel version using the current set of modules and other configuration files.
Replace HID device driver with custom one
1,513,762,571,000
I have a gamma spectrometer that connects as a USB HID. When it is inserted dmesg helpfully informs me that two device files were made for it, hiddev0 and hidraw2 (obviously, the numbering isn't important.) Based on the documentation and a visual inspection of the bytes, I want to be reading from hidraw2. But I am curious what sort of data is coming through hiddev0, because I was stuck trying to figure it out for a while before I noticed hidraw2 existed. Here is some example data from hiddev0. 00000000 01 00 00 ff 0d 00 00 00 01 00 00 ff 81 00 00 00 |................| 00000010 01 00 00 ff 0b 00 00 00 01 00 00 ff 00 00 00 00 |................| 00000020 01 00 00 ff 0e 00 00 00 01 00 00 ff c1 00 00 00 |................| 00000030 01 00 00 ff 08 00 00 00 01 00 00 ff 01 00 00 00 |................| 00000040 01 00 00 ff 08 00 00 00 01 00 00 ff 41 00 00 00 |............A...| 00000050 01 00 00 ff 0b 00 00 00 01 00 00 ff 31 00 00 00 |............1...| 00000060 01 00 00 ff 07 00 00 00 01 00 00 ff b1 00 00 00 |................| 00000070 01 00 00 ff 09 00 00 00 01 00 00 ff 01 00 00 00 |................| 00000080 01 00 00 ff 08 00 00 00 01 00 00 ff b1 00 00 00 |................| 00000090 01 00 00 ff 08 00 00 00 01 00 00 ff 51 00 00 00 |............Q...| 000000a0 01 00 00 ff 1d 00 00 00 01 00 00 ff 51 00 00 00 |............Q...| 000000b0 01 00 00 ff 0a 00 00 00 01 00 00 ff f1 00 00 00 |................| 000000c0 01 00 00 ff 08 00 00 00 01 00 00 ff 51 00 00 00 |............Q...| 000000d0 01 00 00 ff 34 00 00 00 01 00 00 ff 91 00 00 00 |....4...........| As requested, here is the line from dmesg. [411407.529580] hid-generic 0003:04D8:0023.0003: hiddev0,hidraw2: USB HID v1.01 Device [Kromek SIGMA50] on usb-0000:00:1a.1-2/input0
Partial answer: The driver is hid-generic, so the next step is to look at the HID descriptor. As root, do

mount -t debugfs none /sys/kernel/debug

And then look at the contents of /sys/kernel/debug/hid/<dev>/rdesc, where <dev> identifies your device.

The HID descriptor describes the format of what you can read from and write to the hidraw descriptor (maybe also important for you). These reports are processed by the kernel HID parser, and then sent to the hiddev descriptor. At least for input devices, the above file also has information about what the kernel parser does with the information, but I'm not sure what happens for hid-generic. Comparing what you see on hidraw and hiddev should allow some pretty good guesses about what the kernel parser does, and if in doubt one can read the source.

You can find more information in Documentation/hid/hidraw.txt and Documentation/hid/hiddev.txt in the kernel sources. The "hidpage" from the comments has the HID standards, if you want to read those.
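In fact, the dump in the question matches struct hiddev_event from linux/hiddev.h: an unsigned 32-bit usage code followed by a signed 32-bit value, 8 bytes per event, little-endian on this machine. A decoding sketch over the first 16 bytes of the capture:

```python
import struct

# struct hiddev_event { unsigned hid; signed int value; } -- 8 bytes each.
HIDDEV_EVENT = "<Ii"

def parse_hiddev(buf: bytes):
    """Split a hiddev read() buffer into (usage, value) pairs."""
    size = struct.calcsize(HIDDEV_EVENT)
    return [struct.unpack_from(HIDDEV_EVENT, buf, off)
            for off in range(0, len(buf), size)]

# First 16 bytes of the capture in the question:
sample = bytes.fromhex("010000ff0d000000" "010000ff81000000")
events = parse_hiddev(sample)
# -> [(0xff000001, 0x0d), (0xff000001, 0x81)]
```

Usage 0xff000001 is on a vendor-defined page (0xff00), which fits a hid-generic device: the kernel parser is delivering the vendor payload one value per event, which is why the raw bytes on hidraw2 are easier to work with.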
Format of hiddev bytes?
1,513,762,571,000
I have a USB device which communicates with a wireless handheld remote (Dupad G20S Pro Plus). It works great on my debian box. The problem I am trying to solve is preventing the power button on the remote from shutting down the system (I guess the remote is more intended for smart TVs). I did at least figure out via lsusb the offending device capability is: % lsusb -vd 4842:0001 ... Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 2 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 2.01 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 121 Report Descriptor: (length is 121) ... Item(Local ): Usage, data= [ 0x81 ] 129 System Power Down ... Is there a way to block this capability at some kernel level? I did find I can block shutdowns in general with systemd-inhibit, but would love to use something lower level (like udev). Update: This is for a debian server (not desktop). The key events are being captured for home automation purposes.
The clue to the solution (for a systemd based linux host) comes from man logind.conf(8). Only input devices with the "power-switch" udev tag will be watched for key/lid switch events. Indeed this tag is added by the default udev rules: SUBSYSTEM=="input", KERNEL=="event*", ENV{ID_INPUT_KEY}=="1", TAG+="power-switch" I was able to block the action by commenting the rules out in that file (/etc/udev/rules.d/70-power-switch.rules). Additional Info: My original fix attempt was to remove the tag with this new rule in a new file /etc/udev/rules.d/80-power-switch.rules (processed after 70-power-switch.rules): SUBSYSTEM=="input", KERNEL=="event*", TAG-="power-switch" Even though my version of systemd supported tag removal (249), I could not get it to work. My best guess is systemd has already been alerted to the original tag, and removing the tag is not supported.
Is it possible to block capabilities in a USB device?
1,513,762,571,000
I have a Kortek touch screen that is exposed to the system, as shown in /proc/bus/input/devices, via two drivers : hid-generic and hid-multitouch I do not want hid-multitouch driver to expose Kortek touchscreen meaning I want to disable Kortek from hid-multitouch. Is there a way I can do this ? can I use quirks ? and if yes, how ?
While running your kernel, you could unbind the driver from the device, or you could remove your device from the mt_devices id_table inside drivers/hid/hid-multitouch.c in the linux kernel source.
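If patching the kernel is not an option, there is also a coarse runtime switch worth knowing about: the hid core's ignore_special_drivers module parameter makes every HID device bind to hid-generic, bypassing hid-multitouch along with all other specialised HID drivers. A sketch (note the big caveat: this affects all HID devices on the system, not just the Kortek):

```
# /etc/modprobe.d/hid.conf
# Force hid-generic for every HID device (coarse: applies system-wide).
options hid ignore_special_drivers=1
```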
How to disable a device (hardware) from using hid-multitouch driver?
1,513,762,571,000
When a USB mouse is connected how does the system tell it's a mouse? Does it send some signal? I need to implement (something like) a little mouse using an fpga board. I can output x and y coordinates from the board. How do I take the input x and y coordinates from the board and tell the system to control the mouse using them? I think I need to tell the system to treat the board as a mouse. How do I do that? To be exact: it's a touch screen using proximity sensors. Edit: The board is an "Altera Cyclone 4 DE2-115" Edit2: We're using Verilog
When a USB mouse is connected how does the system tell it's a mouse? Does it send some signal?

Yes, it sends a USB descriptor, from which the host can tell that it is a mouse and how it expects the host to start reading input from it.

How do I take the input x and y coordinates from the board and tell the system to control the mouse using them?

Making it a proper USB device is a possible solution, which might even get you extra credit. Do not underestimate the complexity of implementing USB, though, especially without hardware support (it can be done through bit-banging, e.g.: http://hackaday.com/2014/03/22/bitbanging-usb-on-low-power-arms/). The simplest way is to implement a PS/2 mouse rather than a USB mouse. You can include a PS/2 to USB converter with your project. Another way is to implement a serial mouse.
How is a mouse identified? Then how do I "implement a mouse"?
1,513,762,571,000
I am trying to understand touchscreens, and I came across these two kernel modules: usbtouchscreen and usbhid. I am confused as to what exactly the difference is. Let's assume I have a touchscreen connected to my hardware via USB; which of the two should I be using? I know the obvious answer would be: try installing either and see if it works. But what I am looking for is: what data is sent by these two drivers for a USB touchscreen, in the case of either usbhid or plain usbtouchscreen? And how does evdev convert those different data packets/info into unified touch events?
A HID (“human interface device”) is a device that is intended to allow humans to interact with the computer, such as a keyboard, a mouse, a monitor, a microphone, a loudspeaker, etc. USB defines a number of standard device classes: types of devices with some common properties. One of them is HID, which in the context of USB only covers low-bandwidth devices: mostly input devices such as keyboards, mice, joysticks, touchscreen input, etc. A touchscreen requires features that are not in the basic HID protocol (at least if it supports multitouch), so touchscreens can't be handled by a pure HID driver. Linux has a usbtouchscreen module which supports many USB touchscreen models. In any case, USB devices identify themselves, and USB drivers know what device identification they support. Linux automatically loads the right driver for USB devices. See Are driver modules loaded and unloaded automatically? and Debian does not detect serial PCI card after reboot
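The automatic driver selection mentioned above works by matching a device's modalias string (built from its descriptors) against glob-like patterns each driver exports. A much simplified illustration of that matching using shell patterns; the modalias string and patterns below are made-up examples for illustration, not real driver tables:

```shell
# A device identifies itself with a "modalias" string encoding vendor,
# product, class, etc.; drivers export glob patterns for what they claim.
alias='usb:v0EEFp0001d0100dc00dsc00dp00ic03isc00ip00in00'
pattern='usb:v0EEFp0001*'                      # specific vendor/product claim
generic='usb:v*p*d*dc*dsc*dp*ic03isc*ip*in*'   # broad HID-class (ic03) claim

# match PATTERN STRING -- shell-pattern matching, standing in for the
# kernel's real matcher.
match() {
  case $2 in ($1) return 0;; (*) return 1;; esac
}

match "$pattern" "$alias" && echo "specific driver matches"
match "$generic" "$alias" && echo "generic HID driver matches too"
```

On a real system the device side of such a string can be read from the device's modalias file in sysfs, and the patterns a driver claims are shown by modinfo in its alias lines.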
what is the difference between usbtouchscreen and usbhid?
1,513,762,571,000
So, I've recently purchased the named keyboard and have been doing some reverse engineering as to how the Logitech Gaming Software does things with it. In this process I've discovered that a few magic packets are sent to the device to unbind the default f1-6 from g1-6; however, after this part things get tricky. None of the special keys (m1-3, mr, g1-6) report any scancode according to any standard tool; they all send hid reports on the same usage, ff00.0003, using bitwise logic. Each key sends an hid report in the following format: 03 gg mm where gg is g# = (0x01 << #-1) and mm is m# = (0x01 << #-1) (mr treated as m4 for this math), so pressing g1 and g2 at the same time yields 04 03 01 and so on; the values are OR'd together. As such, I cannot find any particularly useful way of mapping these hid reports to a known scancode (say, BTN_TRIGGER_HAPPY?) for easy userspace remapping with xbindkeys or the like. You can find an extensive dump of information on this keyboard at https://github.com/GSeriesDev/gseries-tools/blob/master/g105/info , if it's of any help.
There is now a Linux driver for the Logitech G105 Keyboard, it's called sidewinderd, available on github.
Map non-standard hid reports to scancodes for Logitech G105 Gaming Keyboard
1,513,762,571,000
I'm having a weird problem. I've done some hacking based on another person's work to backport support for the internal keyboard on a MacBook Pro 11,5 into kernel 3.19. My GitHub source can be found here. I've done everything I can to ensure that it's as close to kernel 4.2 as possible while still being able to compile and work as expected on 3.19. However, while booting into 4.2 gives me perfect functionality working as expected, my module doesn't seem to do anything. Existing Apple devices work as expected, but I'm still having the same problems with my built-in keyboard. The problems are based around the fact that the function key doesn't work, and therefore I can't use my media keys. I've also done sanity testing to make sure that other Apple keyboards do work (tested with an Apple wired and wireless keyboard, and both work properly). Is there a way for me to validate that my keyboard is being bound to the right driver? The USB id for the device is 05ac:0274, and a config line can be found for that device in hid-ids.h:147 and in hid-apple.c:553-554. I'm convinced that it's just not picking up the device, because even with the hid-apple module removed, my built-in keyboard works though the other ones don't. How can I debug what's happening and why my built-in keyboard isn't getting bound to the hid-apple module?

EDIT: I was able to get my keyboard bound to the right driver using the following:

# unbind everything matching 05AC:0274 from hid-generic
for dev in `ls /sys/bus/hid/drivers/hid-generic/ | egrep 05AC:0274`; do
    echo -n $dev | sudo tee /sys/bus/hid/drivers/hid-generic/unbind
done

# bind everything matching 05AC:0274 to hid-apple
for dev in `ls /sys/bus/hid/devices/ | egrep 05AC:0274`; do
    echo -n $dev | sudo tee /sys/bus/hid/drivers/hid-apple/bind
done

The problem remains: how do I force a given USB id to associate with a given driver? I'll accept the given answer below, but I'm still looking for a solution...
There is an excellent answer here. The short answer is that the command usb-devices (available for most distros in a package called usbutils or something similar) should give you the info you want on the current driver each USB device is using.
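To script the lookup rather than eyeball usb-devices output, each HID device directory in sysfs carries a driver symlink naming its bound driver. A sketch of reading it; the demonstration below runs against a mock directory tree (with a hypothetical device id) so it can run anywhere, but on a real system the path would be /sys/bus/hid/devices:

```shell
# Print "device -> driver" for every device directory that has a
# 'driver' symlink, the way sysfs device directories do.
list_hid_drivers() {
  base=$1
  for dev in "$base"/*; do
    [ -e "$dev/driver" ] || continue
    printf '%s -> %s\n' "${dev##*/}" "$(basename "$(readlink "$dev/driver")")"
  done
}

# Mock tree standing in for /sys/bus/hid (device id is hypothetical):
mock=$(mktemp -d)
mkdir -p "$mock/drivers/hid-apple" "$mock/devices/0005:05AC:0274.0001"
ln -s ../../drivers/hid-apple "$mock/devices/0005:05AC:0274.0001/driver"

list_hid_drivers "$mock/devices"
```

On real hardware, a device with no driver symlink at all is one nothing has bound to yet.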
Determine which module is bound by a HID device?
1,513,762,571,000
My Logitech Wave Cordless keyboard presents itself as two devices to the kernel. One is a regular keyboard which works fine, but all the additional keys appear as an event-mouse, such that cat /dev/input/by-id/usb-Logitech_USB_Receiver-if01-event-mouse produces the expected garbage when the buttons are pressed, but xev doesn't register anything at all. I've tried hidpoint, which doesn't want to run on OpenSuse Tumbleweed, and I've tried usbhid.quirks=0x46d:0xc517:0x40 on the kernel parameters to force 'multi-identity' recognition, but I'm out of my depth at this point so may well not properly understand what I'm doing. Any suggestions about how best to persuade the kernel to recognise the extra device as a keyboard rather than a mouse?

Further info as requested:

lsusb:
Bus 001 Device 007: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

dmesg:
usb 1-6: new low-speed USB device number 7 using xhci_hcd
usb 1-6: New USB device found, idVendor=046d, idProduct=c517
usb 1-6: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-6: Product: USB Receiver
usb 1-6: Manufacturer: Logitech
input: Logitech USB Receiver as /devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6:1.0/0003:046D:C517.0009/input/input14
logitech 0003:046D:C517.0009: input,hidraw3: USB HID v1.10 Keyboard [Logitech USB Receiver] on usb-0000:00:14.0-6/input0
logitech 0003:046D:C517.000A: fixing up Logitech keyboard report descriptor
input: Logitech USB Receiver as /devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6:1.1/0003:046D:C517.000A/input/input15
logitech 0003:046D:C517.000A: input,hiddev0,hidraw4: USB HID v1.10 Mouse [Logitech USB Receiver] on usb-0000:00:14.0-6/input1

$ evtest /dev/input/by-id/usb-Logitech_USB_Receiver-if01-event-mouse > evtestdump
Input driver version is 1.0.1
Input device ID: bus 0x3 vendor 0x46d product 0xc517 version 0x110
Input device name: "Logitech USB Receiver"
Supported events:
Event type 0 (EV_SYN)
Event type 1 (EV_KEY)
Event code 1 (KEY_ESC)
Event code 28 (KEY_ENTER)
Event code 74 (KEY_KPMINUS) Event code 78 (KEY_KPPLUS) Event code 103 (KEY_UP) ... Event code 241 (KEY_VIDEO_NEXT) Event code 244 (KEY_BRIGHTNESS_ZERO) Event code 256 (BTN_0) Event code 272 (BTN_LEFT) Event code 273 (BTN_RIGHT) Event code 274 (BTN_MIDDLE) Event code 275 (BTN_SIDE) Event code 276 (BTN_EXTRA) Event code 277 (BTN_FORWARD) Event code 278 (BTN_BACK) Event code 279 (BTN_TASK) Event code 352 (KEY_OK) Event code 353 (KEY_SELECT) Event code 354 (KEY_GOTO) Event code 358 (KEY_INFO) Event code 362 (KEY_PROGRAM) Event code 366 (KEY_PVR) Event code 370 (KEY_SUBTITLE) Event code 371 (KEY_ANGLE) Event code 372 (KEY_ZOOM) Event code 374 (KEY_KEYBOARD) Event code 376 (KEY_PC) Event code 377 (KEY_TV) Event code 378 (KEY_TV2) ... Event code 431 (KEY_DISPLAYTOGGLE) Event code 432 (KEY_SPELLCHECK) Event code 433 (KEY_LOGOFF) Event code 439 (KEY_MEDIA_REPEAT) Event code 442 (KEY_IMAGES) Event code 478 (KEY_FN_1) Event code 479 (KEY_FN_2) Event code 576 (KEY_BUTTONCONFIG) Event code 577 (KEY_TASKMANAGER) Event code 578 (KEY_JOURNAL) Event code 579 (KEY_CONTROLPANEL) Event code 580 (KEY_APPSELECT) Event code 581 (KEY_SCREENSAVER) Event code 582 (KEY_VOICECOMMAND) Event code 592 (KEY_BRIGHTNESS_MIN) Event code 593 (KEY_BRIGHTNESS_MAX) Event code 608 (KEY_KBDINPUTASSIST_PREV) Event code 609 (KEY_KBDINPUTASSIST_NEXT) Event code 610 (KEY_KBDINPUTASSIST_PREVGROUP) Event code 611 (KEY_KBDINPUTASSIST_NEXTGROUP) Event code 612 (KEY_KBDINPUTASSIST_ACCEPT) Event code 613 (KEY_KBDINPUTASSIST_CANCEL) Event type 2 (EV_REL) Event code 0 (REL_X) Event code 1 (REL_Y) Event code 6 (REL_HWHEEL) Event code 7 (REL_DIAL) Event code 8 (REL_WHEEL) Event type 3 (EV_ABS) Event code 32 (ABS_VOLUME) Value 0 Min 1 Max 4173 Event type 4 (EV_MSC) Event code 4 (MSC_SCAN) Properties: Testing ... 
(interrupt to exit) Event: time 1498324926.500910, type 4 (EV_MSC), code 4 (MSC_SCAN), value c101c Event: time 1498324926.500910, type 1 (EV_KEY), code 154 (KEY_CYCLEWINDOWS), value 1 Event: time 1498324926.500910, -------------- SYN_REPORT ------------ Event: time 1498324926.644944, type 4 (EV_MSC), code 4 (MSC_SCAN), value c101c Event: time 1498324926.644944, type 1 (EV_KEY), code 154 (KEY_CYCLEWINDOWS), value 0 Event: time 1498324926.644944, -------------- SYN_REPORT ------------ Event: time 1498324926.932933, type 4 (EV_MSC), code 4 (MSC_SCAN), value c101f Event: time 1498324926.932933, type 1 (EV_KEY), code 419 (KEY_ZOOMOUT), value 1 Event: time 1498324926.932933, -------------- SYN_REPORT ------------ Event: time 1498324927.052921, type 4 (EV_MSC), code 4 (MSC_SCAN), value c101f Event: time 1498324927.052921, type 1 (EV_KEY), code 419 (KEY_ZOOMOUT), value 0 Event: time 1498324927.052921, -------------- SYN_REPORT ------------ Event: time 1498324927.396932, type 4 (EV_MSC), code 4 (MSC_SCAN), value c1020 Event: time 1498324927.396932, type 1 (EV_KEY), code 418 (KEY_ZOOMIN), value 1 Event: time 1498324927.396932, -------------- SYN_REPORT ------------ Event: time 1498324927.548930, type 4 (EV_MSC), code 4 (MSC_SCAN), value c1020 Event: time 1498324927.548930, type 1 (EV_KEY), code 418 (KEY_ZOOMIN), value 0 Event: time 1498324927.548930, -------------- SYN_REPORT ------------ Event: time 1498324927.916944, type 4 (EV_MSC), code 4 (MSC_SCAN), value c103d Event: time 1498324927.916944, type 1 (EV_KEY), code 240 (KEY_UNKNOWN), value 1 Event: time 1498324927.916944, -------------- SYN_REPORT ------------ Event: time 1498324928.084925, type 4 (EV_MSC), code 4 (MSC_SCAN), value c103d Event: time 1498324928.084925, type 1 (EV_KEY), code 240 (KEY_UNKNOWN), value 0 Event: time 1498324928.084925, -------------- SYN_REPORT ------------ Event: time 1498324928.460914, type 4 (EV_MSC), code 4 (MSC_SCAN), value c1005 Event: time 1498324928.460914, type 1 (EV_KEY), code 212 
(KEY_CAMERA), value 1 Event: time 1498324928.460914, -------------- SYN_REPORT ------------ Event: time 1498324928.628903, type 4 (EV_MSC), code 4 (MSC_SCAN), value c1005 Event: time 1498324928.628903, type 1 (EV_KEY), code 212 (KEY_CAMERA), value 0 Event: time 1498324928.628903, -------------- SYN_REPORT ------------ Event: time 1498324930.876924, type 4 (EV_MSC), code 4 (MSC_SCAN), value c00b6 Event: time 1498324930.876924, type 1 (EV_KEY), code 165 (KEY_PREVIOUSSONG), value 1 Event: time 1498324930.876924, -------------- SYN_REPORT ------------ Event: time 1498324930.908915, type 4 (EV_MSC), code 4 (MSC_SCAN), value c00b6 Event: time 1498324930.908915, type 1 (EV_KEY), code 165 (KEY_PREVIOUSSONG), value 0 Event: time 1498324930.908915, -------------- SYN_REPORT ------------ Event: time 1498324931.684927, type 4 (EV_MSC), code 4 (MSC_SCAN), value c00b5 Event: time 1498324931.684927, type 1 (EV_KEY), code 163 (KEY_NEXTSONG), value 1 Event: time 1498324931.684927, -------------- SYN_REPORT ------------ Event: time 1498324931.724935, type 4 (EV_MSC), code 4 (MSC_SCAN), value c00b5 Event: time 1498324931.724935, type 1 (EV_KEY), code 163 (KEY_NEXTSONG), value 0 Event: time 1498324931.724935, -------------- SYN_REPORT ------------ Event: time 1498324932.652916, type 4 (EV_MSC), code 4 (MSC_SCAN), value c0183 Event: time 1498324932.652916, type 1 (EV_KEY), code 226 (KEY_MEDIA), value 1 Event: time 1498324932.652916, -------------- SYN_REPORT ------------ Event: time 1498324932.812954, type 4 (EV_MSC), code 4 (MSC_SCAN), value c0183 Event: time 1498324932.812954, type 1 (EV_KEY), code 226 (KEY_MEDIA), value 0 Event: time 1498324932.812954, -------------- SYN_REPORT ------------ Event: time 1498324933.748907, type 4 (EV_MSC), code 4 (MSC_SCAN), value c0192 Event: time 1498324933.748907, type 1 (EV_KEY), code 140 (KEY_CALC), value 1 Event: time 1498324933.748907, -------------- SYN_REPORT ------------ Event: time 1498324933.884934, type 4 (EV_MSC), code 4 (MSC_SCAN), 
value c0192 Event: time 1498324933.884934, type 1 (EV_KEY), code 140 (KEY_CALC), value 0 Event: time 1498324933.884934, -------------- SYN_REPORT ------------ Event: time 1498324938.084936, type 4 (EV_MSC), code 4 (MSC_SCAN), value 10082 Event: time 1498324938.084936, type 1 (EV_KEY), code 142 (KEY_SLEEP), value 1 Event: time 1498324938.084936, -------------- SYN_REPORT ------------ Event: time 1498324938.100912, type 4 (EV_MSC), code 4 (MSC_SCAN), value 10082 Event: time 1498324938.100912, type 1 (EV_KEY), code 142 (KEY_SLEEP), value 0 Event: time 1498324938.100912, -------------- SYN_REPORT ------------ (II) config/udev: Adding input device Logitech USB Receiver (/dev/input/mouse1) (**) Logitech USB Receiver: Applying InputClass "system-keyboard" (**) Logitech USB Receiver: Applying InputClass "Logitech USB TrackBall" (**) Logitech USB Receiver: Applying InputClass "Logitech M570 Trackball" (II) Using input driver 'evdev' for 'Logitech USB Receiver' (**) Option "SendCoreEvents" "true" (**) Logitech USB Receiver: always reports core events (**) evdev: Logitech USB Receiver: Device: "/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse" (WW) evdev: Logitech USB Receiver: device file is duplicate. Ignoring. 
(EE) PreInit returned 8 for "Logitech USB Receiver" (II) UnloadModule: "evdev" (II) config/udev: Adding input device Logitech USB Receiver (/dev/input/event11) (**) Logitech USB Receiver: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Receiver: Applying InputClass "system-keyboard" (**) Logitech USB Receiver: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Receiver: Applying InputClass "libinput keyboard catchall" (II) Using input driver 'libinput' for 'Logitech USB Receiver' (**) Logitech USB Receiver: always reports core events (**) Option "Device" "/dev/input/event11" (**) Option "_source" "server/udev" (II) event11 - (II) Logitech USB Receiver: (II) is tagged by udev as: Keyboard (II) event11 - (II) Logitech USB Receiver: (II) device is a keyboard (II) event11 - (II) Logitech USB Receiver: (II) device removed (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6:1.0/0003:046D:C517.000D/input/input18/event11" (II) XINPUT: Adding extended input device "Logitech USB Receiver" (type: KEYBOARD, id 11) (**) Option "xkb_model" "microsoftpro" (**) Option "xkb_layout" "gb" (**) Option "xkb_options" "terminate:ctrl_alt_bksp" (II) event11 - (II) Logitech USB Receiver: (II) is tagged by udev as: Keyboard (II) event11 - (II) Logitech USB Receiver: (II) device is a keyboard (II) config/udev: Adding input device Logitech USB Receiver (/dev/input/event12) (**) Logitech USB Receiver: Applying InputClass "evdev pointer catchall" (**) Logitech USB Receiver: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Receiver: Applying InputClass "system-keyboard" (**) Logitech USB Receiver: Applying InputClass "evdev pointer catchall" (**) Logitech USB Receiver: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Receiver: Applying InputClass "Logitech USB TrackBall" (**) Logitech USB Receiver: Applying InputClass "Logitech M570 Trackball" (**) Logitech USB Receiver: Applying InputClass "libinput pointer 
catchall" (**) Logitech USB Receiver: Applying InputClass "libinput keyboard catchall" (II) Using input driver 'libinput' for 'Logitech USB Receiver' (**) Option "SendCoreEvents" "true" (**) Logitech USB Receiver: always reports core events (**) Option "Device" "/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse" (**) Option "_source" "server/udev" (EE) Failed to look up path '/dev/input/event13' (II) event13: opening input device '/dev/input/event13' failed (No such device). (II) event13 - failed to create input device '/dev/input/event13'. (EE) libinput: Logitech USB Receiver: Failed to create a device for /dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse (EE) PreInit returned 2 for "Logitech USB Receiver" (II) UnloadModule: "libinput" I do have an M570 trackball as well that the receiver is obviously connecting to although it is already linked via a universal receiver.
Partial answer: how to get more information.

1) Update the question with the output of lsusb so we can see the vendor and device id.
2) Update the question with the dmesg output when the combo is recognized. Unplug and replug the dongle to force re-recognition if you can't find it in the boot messages.
3) Run evtest as root on the mouse input device to see (a) what events it claims to produce and (b) what actual events it produces when you press the additional keys. Update the question with that output.
4) Look into /var/log/Xorg.0.log to see as what device the evdev driver recognizes it. Update the question with the relevant lines.

That should at least allow us to pinpoint the reason why the device gets recognized as a mouse.

Edit

I don't understand how the Logitech driver is supposed to work, but what happens is that the second device does indeed seem to be reserved for extra keys and for the mouse (EV_REL) events, so maybe it's some kind of catch-all thing. From the kernel side that makes no difference; all the kernel knows is that it translates USB HID events to input events. And udev creates symlinks with misleading names, but that doesn't matter, either. What matters is that X seems to decide that the second input device is a duplicate (maybe because it has the same name). So I'd try to make an xorg.conf with an InputClass section in it, and play around with various options in the hope of getting X to accept the device. I'm not sure why X rejects it, so I can't give step-by-step instructions. See man xorg.conf about options for InputClass, and google a bit to understand what they do if the description is not sufficient; there are plenty of guides. Besides checking the X log, also have a look at what devices xinput lists. It's enough to make the device show up in this list, even if it shows up as a mouse - you can reassign it to the Virtual core keyboard.
And it will probably get detected as a mouse, because X thinks (probably correctly in most cases) that something with EV_REL events must be a mouse, even if it has additional EV_KEY buttons.
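For the InputClass experiment suggested above, an illustrative starting point; the Identifier is arbitrary and the match/option lines are guesses to iterate on with man xorg.conf, not a known-working fix:

```
Section "InputClass"
    Identifier "Logitech receiver extra keys"
    MatchUSBID "046d:c517"
    Driver     "evdev"
    Option     "SendCoreEvents" "true"
EndSection
```

Check what xinput lists after restarting X; getting the device to appear there at all is the first milestone.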
My keyboard identifies as a mouse
1,513,762,571,000
Recently I tried to fix my touchpad lag with a firmware update, but it broke my whole touchpad. Now movement is inverted, and right click doesn't work. My touchpad is an ELAN1200 04F3:304E, one of the worst-supported touchpads ever. However, I still have hope. I know that the touchpad is recognized as an I2C-HID device, and since a program could upgrade its firmware, it must be possible to read and write data to the touchpad's chip. So I'm trying to look up the connected I2C devices, but have no luck with i2cdetect -l. My lsusb doesn't show the touchpad either:

$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 0b05:1869 ASUSTek Computer, Inc.
Bus 001 Device 004: ID 13d3:5666 IMC Networks
Bus 001 Device 003: ID 8087:0a2b Intel Corp.
Bus 001 Device 002: ID 09da:7dc8 A4Tech Co., Ltd.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Though xinput recognizes it:

$ xinput
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ COMPANY USB Device id=13 [slave pointer (2)]
⎜ ↳ COMPANY USB Device Consumer Control id=16 [slave pointer (2)]
⎜ ↳ ITE Tech. Inc. ITE Device(8910) Consumer Control id=19 [slave pointer (2)]
⎜ ↳ ELAN1200:00 04F3:304E Touchpad id=22 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Asus Wireless Radio Control id=7 [slave keyboard (3)]
↳ Video Bus id=8 [slave keyboard (3)]
↳ Video Bus id=9 [slave keyboard (3)]
↳ Power Button id=10 [slave keyboard (3)]
↳ Sleep Button id=11 [slave keyboard (3)]
↳ COMPANY USB Device id=12 [slave keyboard (3)]
↳ COMPANY USB Device Keyboard id=14 [slave keyboard (3)]
↳ COMPANY USB Device System Control id=15 [slave keyboard (3)]
↳ USB2.0 HD UVC WebCam: USB2.0 HD id=17 [slave keyboard (3)]
↳ ITE Tech. Inc. ITE Device(8910) Keyboard id=18 [slave keyboard (3)]
↳ ITE Tech. Inc. ITE Device(8910) Wireless Radio Control id=20 [slave keyboard (3)]
↳ ITE Tech. Inc. ITE Device(8910) System Control id=21 [slave keyboard (3)]
↳ Asus WMI hotkeys id=23 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=24 [slave keyboard (3)]
↳ COMPANY USB Device Consumer Control id=25 [slave keyboard (3)]
↳ ITE Tech. Inc. ITE Device(8910) Consumer Control id=26 [slave keyboard (3)]

The program I used to upgrade the touchpad's firmware is here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1653456/comments/161 I'm interested in copying data from a working touchpad chip and pasting it into mine. How do I do it?
For everyone seeking the answer: I reached out to ELANTech and they provided me with the firmware. If anyone ever needs it, feel free to write me at [email protected]
I2C_HID touchpad chip data reading
1,513,762,571,000
I've done some work backporting the kernel modules for hid-apple and bcm5974 (with lots of help from SicVolo) and writing DKMS scripts for them so I can maintain compatibility across kernel upgrades: rfkrocktk/hid-apple-3.19 rfkrocktk/bcm5974-3.19 The patches are pretty straightforward; they just add support for these new USB product ids. The problem I'm having is that even after installing these new kernel modules using DKMS, my devices are never bound to the right drivers; they're always bound to usbhid and then to hid-generic, where they should be bound by hid-apple and bcm5974 for the keyboard and trackpad respectively. The changes are really simple and, as far as I can tell, they should tell the kernel enough to bind the right devices to the right drivers. Is there a step I'm missing in order to tell the kernel that it really should bind these devices to these drivers? Am I installing the modules in the wrong place in DKMS? If I go through the hassle of rebinding the devices to the right drivers (i.e. locate, look up, unbind, bind), they work great and the patches are functioning as expected. But how do I get the kernel to bind things the right way by default?
My problem was that I was installing the packages into the wrong directories in DKMS. It's important to set DEST_MODULE_LOCATION to point to the directory within the kernel drivers in which your module is supposed to live. I was installing into /updates, but this was the wrong place. I had to move it to /kernel/drivers/hid to get it recognized. The weird thing is that DKMS seems to still install the driver into /extras no matter what you pass here, but somehow this fixes it.
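For reference, the relevant part of a dkms.conf reflecting that fix; the package name and version here are illustrative, the key line being DEST_MODULE_LOCATION pointing into the kernel's own driver tree rather than /updates:

```
PACKAGE_NAME="hid-apple"
PACKAGE_VERSION="3.19"
BUILT_MODULE_NAME[0]="hid-apple"
# Must point at the driver's in-tree location, not /updates:
DEST_MODULE_LOCATION[0]="/kernel/drivers/hid"
AUTOINSTALL="yes"
```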
Kernel not recognizing new devices from DKMS module?
1,453,743,545,000
On my Arch install, /etc/bash.bashrc and /etc/skel/.bashrc contain these lines: # If not running interactively, don't do anything [[ $- != *i* ]] && return On Debian, /etc/bash.bashrc has: # If not running interactively, don't do anything [ -z "$PS1" ] && return And /etc/skel/.bashrc: # If not running interactively, don't do anything case $- in *i*) ;; *) return;; esac According to man bash, however, non-interactive shells don't even read these files: When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following commands were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the filename. If I understand correctly, the *.bashrc files will only be read if BASH_ENV is set to point to them. This is something that can't happen by chance and will only occur if someone has explicitly set the variable accordingly. That seems to break the possibility of having scripts source a user's .bashrc automatically by setting BASH_ENV, something that could come in handy. Given that bash will never read these files when run non-interactively unless explicitly told to do so, why do the default *bashrc files disallow it?
This is a question that I was going to post here a few weeks ago. Like terdon, I understood that a .bashrc is only sourced for interactive Bash shells so there should be no need for .bashrc to check if it is running in an interactive shell. Confusingly, all the distributions I use (Ubuntu, RHEL and Cygwin) had some type of check (testing $- or $PS1) to ensure the current shell is interactive. I don’t like cargo cult programming so I set about understanding the purpose of this code in my .bashrc. Bash has a special case for remote shells After researching the issue, I discovered that remote shells are treated differently. While non-interactive Bash shells don’t normally run ~/.bashrc commands at start-up, a special case is made when the shell is Invoked by remote shell daemon: Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc, if that file exists and is readable. It will not do this if invoked as sh. The --norc option may be used to inhibit this behavior, and the --rcfile option may be used to force another file to be read, but neither rshd nor sshd generally invoke the shell with those options or allow them to be specified. Example Insert the following at the start of a remote .bashrc. (If .bashrc is sourced by .profile or .bash_profile, temporarily disable this while testing): echo bashrc fun() { echo functions work } Run the following commands locally: $ ssh remote_host 'echo $- $0' bashrc hBc bash No i in $- indicates that the shell is non-interactive. No leading - in $0 indicates that the shell is not a login shell. Shell functions defined in the remote .bashrc can also be run: $ ssh remote_host fun bashrc functions work I noticed that the ~/.bashrc is only sourced when a command is specified as the argument for ssh. 
This makes sense: when ssh is used to start a regular login shell, .profile or .bash_profile is run (and .bashrc is only sourced if one of those files does so explicitly). The main benefit I can see to having .bashrc sourced when running a (non-interactive) remote command is that shell functions can be run. However, most of the commands in a typical .bashrc are only relevant in an interactive shell; for example, aliases aren't expanded unless the shell is interactive.

Remote file transfers can fail

This isn't usually a problem when rsh or ssh are used to start an interactive login shell, or when non-interactive shells are used to run commands. However, it can be a problem for programs such as rcp, scp and sftp that use remote shells for transferring data.

It turns out that the remote user's default shell (like Bash) is implicitly started when using the scp command. There's no mention of this in the man page – only a mention that scp uses ssh for its data transfer. This has the consequence that if the .bashrc contains any commands that print to standard output, file transfers will fail, e.g., scp fails without error. See also this related Red Hat bug report from 15 years ago, scp breaks when there's an echo command in /etc/bashrc (which was eventually closed as WONTFIX).

Why scp and sftp fail

SCP (Secure Copy) and SFTP (Secure File Transfer Protocol) have their own protocols for the local and remote ends to exchange information about the file(s) being transferred. Any unexpected text from the remote end is (wrongly) interpreted as part of the protocol, and the transfer fails. According to a FAQ from the Snail Book:

    What often happens, though, is that there are statements in either the system or per-user shell startup files on the server (.bashrc, .profile, /etc/csh.cshrc, .login, etc.) which output text messages on login, intended to be read by humans (like fortune, echo "Hi there!", etc.).

    Such code should only produce output on interactive logins, when there is a tty attached to standard input. If it does not make this test, it will insert these text messages where they don't belong: in this case, polluting the protocol stream between scp2/sftp and sftp-server.

    The reason the shell startup files are relevant at all is that sshd employs the user's shell when starting any programs on the user's behalf (using e.g. /bin/sh -c "command"). This is a Unix tradition, and has advantages:

    - The user's usual setup (command aliases, environment variables, umask, etc.) is in effect when remote commands are run.
    - The common practice of setting an account's shell to /bin/false to disable it will prevent the owner from running any commands, should authentication still accidentally succeed for some reason.

SCP protocol details

For those interested in the details of how SCP works, I found interesting information in How the SCP protocol works, which includes details on "Running scp with talkative shell profiles on the remote side?":

    For example, this can happen if you add this to your shell profile on the remote system:

        echo ""

    Why does it just hang? That comes from the way scp in source mode waits for the confirmation of the first protocol message. If it's not binary 0, it expects that it's a notification of a remote problem, and it reads more characters to form an error message until the new line arrives. Since you didn't print another new line after the first one, your local scp just stays in a loop, blocked on read(2). In the meantime, after the shell profile was processed on the remote side, scp in sink mode was started, which also blocks on read(2), waiting for a binary zero denoting the start of the data transfer.

Conclusion / TLDR

Most of the statements in a typical .bashrc are only useful for an interactive shell – not when running remote commands with rsh or ssh. In most such situations, setting shell variables, defining aliases and defining functions isn't desired – and printing any text to standard output is actively harmful if transferring files using programs such as scp or sftp. Exiting after verifying that the current shell is non-interactive is the safest behaviour for .bashrc.
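The safest-behaviour guard described above can be sketched as a few lines placed at the very top of .bashrc. The demo below writes such a guard to a temporary file and sources it from a non-interactive shell, showing that everything after the guard is skipped (so nothing pollutes an scp/sftp protocol stream):

```shell
# Sketch of the guard discussed above. In a real setup these lines go at
# the very top of ~/.bashrc; here we source a mock file to demonstrate.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
case $- in
    *i*) ;;          # interactive: keep sourcing the rest of .bashrc
    *)   return ;;   # non-interactive: stop here, print nothing
esac
echo "interactive-only setup (aliases, prompt, ...)"
EOF

# Source the mock .bashrc from this (non-interactive) shell:
. "$tmp"
echo "guard returned before any output was produced"
rm -f "$tmp"
```

Because the sourcing shell is non-interactive, `return` fires before the echo inside the file, and only the final line is printed.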
Why does bashrc check whether the current shell is interactive?
1,453,743,545,000
I have a cron job that is running a script. When I run the script via an interactive shell (ssh'ed in to bash) it works fine. When the script runs by itself via cron, it fails. My guess is that it is using some of the environment variables set in the interactive shell. I'm going to troubleshoot the script and remove those dependencies. After I make changes, I know I could queue up the script in cron to have it run as it normally would, but is there a way I can run the script from the command line, yet tell it to run as it would from cron – i.e. in a non-interactive environment?
The main differences between running a command from cron and running it on the command line are:

- cron is probably using a different shell (generally /bin/sh);
- cron is definitely running in a small environment (which variables are set depends on the cron implementation, so check the cron(8) or crontab(5) man page; generally there's just HOME, perhaps SHELL, perhaps LOGNAME, perhaps USER, and a small PATH);
- cron treats the % character specially (it is turned into a newline);
- cron jobs run without a terminal or graphical environment.

The following invocation will run the shell snippet pretty much as if it was invoked from cron. I assume the snippet doesn't contain the characters ' or %.

    env - HOME="$HOME" USER="$USER" PATH=/usr/bin:/bin /bin/sh -c 'shell snippet' </dev/null >job.log 2>&1

See also "executing a sh script from the cron", which might help solve your problem.
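The effect of `env -` in the invocation above can be seen directly: everything is cleared, and only the variables explicitly listed survive. The PATH value below mirrors a typical cron default; the variables echoed are just for illustration:

```shell
# Run a command in a cron-like stripped environment and show what survives.
# `env -` clears the environment; only listed variables are passed through.
env - HOME="$HOME" PATH=/usr/bin:/bin /bin/sh -c '
    echo "PATH=$PATH"
    echo "TERM=${TERM:-unset}"
    echo "LANG=${LANG:-unset}"
'
```

With this, TERM and LANG show up as unset even if they were set in the calling shell, which is exactly the kind of difference that makes a script work interactively but fail under cron.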
Run script in a non interactive shell?
1,453,743,545,000
Since upgrading to Python 3.4, all interactive commands are logged to ~/.python_history. I don't want Python to create or write to this file. Creating a symlink to /dev/null does not work: Python removes the file and recreates it. The documentation suggests deleting sys.__interactivehook__, but this also removes tab-completion. What should be done to disable writing this history file while still preserving tab-completion? Additional details: Distro: Arch Linux x86_64, readline 6.3-3, python 3.4.0-2
To prevent Python from writing ~/.python_history, disable the hook that activates this functionality:

    import sys

    # Disable history (...but also auto-completion :/ )
    if hasattr(sys, '__interactivehook__'):
        del sys.__interactivehook__

If you would like to enable tab-completion but disable the history feature, you can adapt the site.enablerlcompleter code. Write the following code to ~/.pythonrc and set export PYTHONSTARTUP=~/.pythonrc in your shell to enable it.

    import sys

    def register_readline_completion():
        # rlcompleter must be loaded for Python-specific completion
        try:
            import readline
            import rlcompleter
        except ImportError:
            return

        # Enable tab-completion
        readline_doc = getattr(readline, '__doc__', '')
        if readline_doc is not None and 'libedit' in readline_doc:
            readline.parse_and_bind('bind ^I rl_complete')
        else:
            readline.parse_and_bind('tab: complete')

    sys.__interactivehook__ = register_readline_completion
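As a quick sanity check of the first approach: in a normal CPython ≥ 3.4 run, the site module registers sys.__interactivehook__ at startup even for scripts (the interpreter only *calls* it for interactive sessions), so deleting it can be exercised outside the REPL:

```python
import sys

# CPython's site module registers sys.__interactivehook__ at startup
# (Python >= 3.4); it is only *called* for interactive sessions, where
# it enables tab-completion and the ~/.python_history file.
print("hook registered:", hasattr(sys, "__interactivehook__"))

# Deleting the attribute disables both features for this process.
if hasattr(sys, "__interactivehook__"):
    del sys.__interactivehook__

print("hook registered:", hasattr(sys, "__interactivehook__"))  # now False
```

This is the same deletion the snippet above performs; the second print confirms the hook is gone for the rest of the session.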
How can I disable the new history feature in Python 3.4?
1,453,743,545,000
I want to create a script that runs when a Zsh instance starts, but only if the instance is: Non-login. Interactive I think I'm right to say .zshrc runs for all interactive shell instances, .zprofile and .zlogin run for all login shells, and .zshenv runs in all cases. The reason I want to do this is to check if there is an existing ssh-agent running, and make use of it in the newly opened shell if there is. I imagine any tests carried out would be best placed in .zshrc (as this guarantees an interactive shell) and the designated "non-login event" script called from there. I probably first want to check if the new shell is already running as part of an existing remote SSH session before testing for the ssh-agent, but I have found this SE recipe for this purpose. I pick Zsh as it is the shell I favor, but I imagine any correct technique to do this would apply similarly to other shells.
    if [[ -o login ]]; then
        echo "I'm a login shell"
    fi

    if [[ -o interactive ]]; then
        echo "I'm interactive"
    fi

[[ -o the-option ]] returns true if the-option is set. You can also get the values of options with the $options special associative array, or by running set -o.

To check if there's an ssh-agent:

    if [[ -w $SSH_AUTH_SOCK ]]; then
        echo "there's one"
    fi

In ksh (and zsh):

    case $- in (*i*) echo interactive; esac
    case $- in (*l*) echo login; esac

In bash, it's a mess; you need:

    case $- in *i*) echo interactive; esac   # that should work in any Bourne/POSIX shell
    case :$BASHOPTS: in (*:login_shell:*) echo login; esac

And $SHELLOPTS contains some more options. Some options you can set with set -<x>, some with set -o option, some with shopt -s option.
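The $- test mentioned for ksh/zsh/bash is also the most portable one, and it is easy to demonstrate: a script execution like the one below is non-interactive, so the first branch never fires:

```shell
# Detect interactivity from $-: an interactive shell has 'i' in $-.
# Run as a script (non-interactive), this prints "non-interactive".
case $- in
    *i*) echo "interactive" ;;
    *)   echo "non-interactive" ;;
esac
```

Pasted into an interactive prompt instead, the same case statement prints "interactive", which is what makes it usable as a guard in startup files.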
How would I detect a non-login shell? (In Zsh)
1,453,743,545,000
I want to search and replace some text in a large set of files excluding some instances. For each line, I want a prompt asking me if I need to replace that line or not. Something similar to vim's :%s/from/to/gc (with the c to prompt for confirmation), but across a set of folders. Is there some good command line tool or script that can be used?
Why not use vim? Open all files in vim:

    vim $(find . -type f)

Or open only the relevant files (as suggested by Caleb):

    vim $(grep -Rl 'from' .)

And then run the replace in all buffers:

    :bufdo %s/from/to/gc | update

You can also do it with sed, but my sed knowledge is limited.
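If per-line confirmation turns out not to be needed, the sed route alluded to above can look like the sketch below (it assumes GNU sed's -i for in-place editing, and runs against a throwaway directory so it is safe to try):

```shell
# Non-interactive alternative: replace in every file that matches,
# with no confirmation prompt. Uses GNU sed's -i (in-place editing).
dir=$(mktemp -d)
printf 'from here\nleave this line alone\n' > "$dir/a.txt"
printf 'nothing to change\n'                > "$dir/b.txt"

grep -Rl 'from' "$dir" | xargs sed -i 's/from/to/g'

cat "$dir/a.txt"
rm -rf "$dir"
```

Unlike vim's `%s/from/to/gc`, this replaces every match unconditionally, so it trades the per-line prompt for speed across a large tree.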
How to do a text replacement in a big folder hierarchy?