1,499,090,140,000
I'm scripting a special program for my company. Using inotifywait from inotify-tools, I'm watching a specific folder for new items; as soon as a new file appears, it is encrypted with gpg and moved to another folder for further treatment. For a single file it works fine, but I noticed a problem: when a new file arrives while another one is being processed, it is ignored, inotifywait doesn't handle it, and the file stays stuck in the folder. Is there any way to handle multiple files at the same time? Here is the code I have so far:

origin=/BRIO/QPC/conclu01/Criptografar
output=/BRIO/QPC/conclu01/GPG
finished=/BRIO/QPC/conclu01/Concluido

while true; do
    inotifywait -e create -e moved_to -e close_write -e moved_from $origin --exclude ".*(\.filepart|gpg|sh)" |
    while read dir event file
    do
        echo $event
        if [ "$event" == 'CLOSE_WRITE,CLOSE' ] || [ "$event" == 'MOVED_TO' ] || [ "$event" == 'CREATE' ]
        then
            echo "Found the file $origin/$file, starting GPG"
            sleep 5
            gpg --encrypt --recipient Lucas --output "$output/$file.gpg" "$origin/$file"
            echo "The file $file was successfully encrypted to $output/$file.gpg"
            mv -f "$origin/$file" $finished
            echo "The file $origin/$file was moved"
        fi
    done
done
Don't run inotifywait repeatedly; run it once in monitor mode and read from its output:

inotifywait -m ... | while read dir event file; do ... done
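A minimal sketch of that monitor-mode version, reusing the paths and recipient from the question (untested against the real folders, so treat it as a starting point, not a drop-in replacement):

```shell
#!/bin/bash
# Paths and recipient taken from the question.
origin=/BRIO/QPC/conclu01/Criptografar
output=/BRIO/QPC/conclu01/GPG
finished=/BRIO/QPC/conclu01/Concluido

# One long-running watcher (-m): events that arrive while a file is being
# processed queue up on the pipe instead of being lost between restarts.
inotifywait -m -e close_write -e moved_to "$origin" \
    --exclude '.*(\.filepart|gpg|sh)' |
while read -r dir event file; do
    echo "Found $dir$file ($event), starting GPG"
    gpg --encrypt --recipient Lucas --output "$output/$file.gpg" "$dir$file" &&
        mv -f "$dir$file" "$finished/"
done
```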
Use Inotifywait to handle multiple files at the same time
Almost all documentation I've found about debian package signing omits the topic of crypto algorithms entirely, and the few I've seen touching the topic mention only RSA, and in one case, DSA. gpg seems to be at the root of all the debian package signing. gpg has had e.g. elliptic curves since 2011, and the 2014 release 2.1.0 made it official. Yet there seems to be no mention of it getting used. Questions: Can you use e.g. EdDSA (Ed25519) key pairs for signing a debian package? Will this work anywhere the installed version of gpg is at least 2.1.0 from 2014, or are there additional restrictions? Are there any algorithms supported by gpg that should not be used for debian package signing, and if so, which ones, and why?
Assuming you're referring to the signing of APT repositories, which is usual, and not to signing individual packages, which is not, then the answer is that APT uses gpgv, and therefore supports all the algorithms that that binary does. Since gpgv is part of GnuPG, it should support all the relevant algorithms the equivalent version of GnuPG does. Therefore, assuming you have a suitable version of gpgv, you can indeed use EdDSA algorithms. That appears to be new in GnuPG 2.1.0, so that should be stretch or newer. Note that the dependencies do permit the use of v1 of gpgv, which wouldn't support that, but that would be a bizarre configuration. In general, you should aim for at least a 128-bit security level. That means that if you're using RSA or DSA, it should be at least a 3072 bit key (which is the max for DSA), or you should use a 256-bit or larger elliptic curve. Unlike SSH, where DSA is practically limited to 1024 bits and is therefore insecure, DSA is not insecure in OpenPGP, but it has fallen out of favor in the cryptographic community because it is slower than most EC algorithms. You should also be sure that your signatures are made with SHA-256 or SHA-512. SHA-1, which used to be the default, is insecure for signatures and APT will no longer accept it (and you wouldn't want to do so even if it did). RSA is currently the default for historical reasons and the fact that EdDSA hasn't been standardized as part of OpenPGP yet. However, unless you need to interoperate with other, non-GnuPG implementations, there's no reason not to use EdDSA.
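As a sketch of how one might create such a key (GnuPG 2.1+ unattended generation; the name, email, and the passphrase-less %no-protection line are placeholders for this demonstration, not a recommendation for a real signing key):

```shell
# Unattended generation of an Ed25519 signing-only key (GnuPG >= 2.1).
# %no-protection (no passphrase) is for demonstration only.
gpg --batch --generate-key <<'EOF'
%no-protection
Key-Type: eddsa
Key-Curve: ed25519
Key-Usage: sign
Name-Real: Repo Signing Demo
Name-Email: repo@example.org
Expire-Date: 0
%commit
EOF

# The listing should now show an ed25519 primary key.
gpg --list-keys --keyid-format long repo@example.org
```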
are crypto algorithms other than RSA valid for debian package signing?
command:

gpg -vvv --debug-all --recv-keys A8BD96F8FD24E96B60232807B3B4C3CECC10C662

output:

gpg: Note: no default option file '/home/user/.gnupg/gpg.conf'
gpg: using character set 'utf-8'
gpg: enabled debug flags: packet mpi crypto filter iobuf memory cache memstat trust hashing ipc clock lookup extprog
gpg: DBG: [not enabled in the source] start
gpg: DBG: chan_3 <- # Home: /home/user/.gnupg
gpg: DBG: chan_3 <- # Config: /home/user/.gnupg/dirmngr.conf
gpg: DBG: chan_3 <- OK Dirmngr 2.2.4 at your service
gpg: DBG: connection to the dirmngr established
gpg: DBG: chan_3 -> GETINFO version
gpg: DBG: chan_3 <- D 2.2.4
gpg: DBG: chan_3 <- OK
gpg: DBG: chan_3 -> KS_GET -- 0xA8BD96F8FD24E96B60232807B3B4C3CECC10C662
gpg: DBG: chan_3 <- ERR 167772339 Not enabled <Dirmngr>
gpg: keyserver receive failed: Not enabled
gpg: DBG: chan_3 -> BYE
gpg: DBG: [not enabled in the source] stop
gpg: keydb: handles=0 locks=0 parse=0 get=0
gpg: build=0 update=0 insert=0 delete=0
gpg: reset=0 found=0 not=0 cache=0 not=0
gpg: kid_not_found_cache: count=0 peak=0 flushes=0
gpg: sig_cache: total=0 cached=0 good=0 bad=0
gpg: random usage: poolsize=600 mixed=0 polls=0/0 added=0/0 outmix=0 getlvl1=0/0 getlvl2=0/0
gpg: rndjent stat: collector=0x0000000000000000 calls=0 bytes=0
gpg: secmem usage: 0/65536 bytes in 0 blocks
I was struggling with this too for a long time. Then I found this in the manual for dirmngr:

--standard-resolver
    This option forces the use of the system's standard DNS resolver code. This is mainly used for debugging. Note that on Windows a standard resolver is not used and all DNS access will return the error ``Not Implemented'' if this option is used. Using this together with enabled Tor mode returns the error ``Not Enabled''.

So it could be that you have a line with standard-resolver in the file ~/.gnupg/dirmngr.conf. If you have that, try removing it. Also kill the dirmngr process after every change of this file. That didn't work for me, since dirmngr does something weird with DNS resolving that only works on Linux. The next step would be to change the option to recursive-resolver. This also didn't work for me; it gave me errors like ERR 167772360 Buffer too short <Dirmngr>. As a last-ditch attempt I added the option no-use-tor at the start of dirmngr.conf, and this finally worked for me. Later, it turned out that an ssh "DynamicForward" option on port 9050 had confused dirmngr into thinking that Tor is in use.
gpg recv-keys error: DBG: Not enabled <Dirmngr>, keyserver receive failed: Not enabled
I want to get rid of the first two lines generated by gpg -d file.txt.gpg, so that only the text itself is left. I tried to use --no-comment, but it seems not to work.

gpg: encrypted with 2048-bit RSA key, ID 4FXXXXXXXXD30D52, created 2020-01-22
      "test test <[email protected]>"
test test444
gpg --quiet -d file.txt.gpg (or -q)
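A self-contained round trip illustrating the effect (symmetric encryption is used here only so the example needs no keypair; file contents and passphrase are placeholders):

```shell
# Create and encrypt a sample file.
echo 'test test444' > file.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo -c file.txt

# Plain -d prints informational headers on stderr; --quiet suppresses them,
# leaving only the plaintext on stdout.
gpg --batch --pinentry-mode loopback --passphrase demo --quiet -d file.txt.gpg
```

Since those extra lines go to stderr, redirecting with 2>/dev/null achieves much the same when you cannot change the options.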
GPG - remove header from decrypted text
I've been using an ssh key for a while by opening it through gpg-agent. I do remember the gpg-agent password, but I no longer have the ssh key itself. How could I recover the ssh key from the gpg agent?
gpg-agent emulates ssh-agent. Authentication requests are sent to the agent, and the agent returns the result. You can't retrieve private keys from the agent, only public keys. It is designed this way on purpose, for security: the agent will never send a private key over its request channel. Instead, operations that require a private key are performed by the agent, and only the result is returned to the requester. This way, private keys are never exposed to clients using the agent. If you want to get back your public key, you can talk to the gpg agent with ssh-add, just as you would with ssh-agent:

# list public keys from the agent
ssh-add -L

Update: details about how key challenges work. When you connect to a server with SSH, the server doesn't directly ask you for the private key and passphrase to do the authentication, because sending them over the net would be insecure. Instead, when the server wants to verify you're who you claim to be, it sends you a challenge calculated with your public key. To complete the authentication, you calculate a response with the private key and the challenge, and send the response back to the requester. The gpg agent handles computing that response for you. It does store the private key and passphrase, but it's designed in a way such that you can't retrieve them out of the agent.
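To make the public-half recovery concrete, here is a self-contained sketch: it creates a throwaway authentication-capable key (placeholder identity, no passphrase, demonstration only) and then exports just its public part in OpenSSH format with gpg --export-ssh-key (available since GnuPG 2.1.13):

```shell
# Throwaway key with the "auth" capability; %no-protection is demo-only.
gpg --batch --generate-key <<'EOF'
%no-protection
Key-Type: eddsa
Key-Curve: ed25519
Key-Usage: sign auth
Name-Email: sshdemo@example.org
Expire-Date: 0
%commit
EOF

# Public half in the format authorized_keys understands;
# the private half never leaves gpg.
gpg --export-ssh-key sshdemo@example.org
```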
Recover lost ssh key still registered in gpg agent
Under Debian 8 I created (presumably with gpg 1 or 2.0) and published my key to a keyserver; the secring.gpg file is still under the directory ~/.gnupg/. But now with gpg 2.1, gpg --list-secret-keys has no output, and attempts to sign something with gpg -s tmp.txt fail with

gpg: no default secret key: secret key not available
gpg: signing failed: secret key not available

Did I somehow botch the upgrade from Debian 8 to 9? Should I have exported the secret key before upgrading, and how can I import the "old" secret key into the "new" gpg? Update 2018-03-01: The problem is simply that - somehow, sometime - my secring.gpg shrank to size 0! :-/ I discovered this with the solution from Stephen Kitt: when trying to import with gpg --import-secret-keys secring.gpg I got a message that my file contains no valid data.
GnuPG 2.1 no longer uses ~/.gnupg/secring.gpg; instead, it uses separate files in ~/.gnupg/private-keys-v1.d, with the help of its agent. There should have been an automatic migration at some point; however, there are a number of scenarios where that misses some information (including the case where a private key is added using GnuPG 1 after the 2.1 migration). To resolve the issue, you should import your secret keyring:

gpg --import ~/.gnupg/secring.gpg

You'll find more useful information in the handy GnuPG 2.1 migration guide, and in the release notes. (I think it's also worth mentioning that your secret keyring should never be exported to a public server, and that GnuPG itself will try to prevent you from doing so; thus you can't rely on external sources of information as backups of your secret keys.)
gpg on Debian 9.3 isn't finding any private keys only public ones
I'm completely new to GnuPG and I'm trying to test it. Basically, what I've done is download the Linux Mint key (ID 0FF405B2) from pgp.mit.edu:

gpg --keyserver pgp.mit.edu --recv-keys 0FF405B2

I've verified that it's in my keychain, from Clement Lefebvre. After that was done, I downloaded the PGP block from a Linux Mint mirror (http://mirror.csclub.uwaterloo.ca/linuxmint/stable/17.3/sha256sum.txt.gpg). I essentially copied and pasted it and saved it as lmgpg.sig. I've also tried lmgpg.gpg and lmgpg.txt.gpg, and even just wget'd it. After that, I saved one of the hashes:

854d0cfaa9139a898c2a22aa505b919ddde34f93b04a831b3f030ffe4e25a8e3 linuxmint-17.3-cinnamon-64bit.iso

as lmsum.txt. So, when all that's done with, I try to verify the file with the hash in it, and even the ISO itself. With both, I get:

gpg --verify lmgpg.sig lmsum.txt
gpg: Signature made Wed 06 Jan 2016 08:06:20 AM PST using DSA key ID 0FF405B2
gpg: BAD signature from "Clement Lefebvre (Linux Mint Package Repository v1) <[email protected]>"

This also happens when I repeat the above operations with files from a Debian stable mirror. Please, what on Earth am I doing wrong?
The various checksum and signature files allow you to verify the files you downloaded, not files you re-create yourself. So you download the ISO image and the verification files:

wget http://mirror.csclub.uwaterloo.ca/linuxmint/stable/17.3/linuxmint-17.3-cinnamon-64bit.iso
wget http://mirror.csclub.uwaterloo.ca/linuxmint/stable/17.3/sha256sum.txt{.gpg,}

and pull the GPG key into your keychain as you did, then verify the files:

sha256sum -c sha256sum.txt

which complains about missing files, but verifies the ISO you downloaded, and

gpg --verify sha256sum.txt.gpg sha256sum.txt

which should tell you that the signature is good. The signature is only valid for the exact file which was signed; you can't create part of it using sha256sum and have it verify that. The whole point of this exercise is to verify that the ISO is correct, according to sha256sum, and that the SHA-256 checksum is itself correct, according to the GnuPG signature; crucially the last part relies on Clement Lefebvre's Linux Mint key which you downloaded separately from a different source.
Trying to verify file integrity with GnuPG. "BAD signature" all the time
I want to check whether the passphrase of my user-id stored inside a file is correct or not. I have stored my passphrase in a file (/home/user/.gpg_pass.txt); then I use it as:

gpg --verbose --batch --yes --pinentry-mode loopback \
    --passphrase-file=/home/user/.gpg_pass.txt --decrypt <file>

Before using this command, I want to verify that the passphrase inside the file is correctly entered. I have tried the following, which did not help:

cat /home/user/.gpg_pass.txt | gpg --dry-run --passwd <key_id>

From the man page of gpg:

--passwd user-id
    Change the passphrase of the secret key belonging to the certificate specified as user-id. This is a shortcut for the sub-command passwd of the edit key menu. When used together with the option --dry-run this will not actually change the passphrase but check that the current passphrase is correct.

When I enter:

$ gpg --dry-run --passwd <key_id>

the following window shows up twice, and I enter the passphrase (if a wrong passphrase is entered it says Bad Passphrase (try 2 of 3) in the console GUI):

┌────────────────────────────────────────────────────────────────┐
│ Please enter the passphrase to unlock the OpenPGP secret key:  │
│ "Alper <[email protected]>"                                  │
│ 3072-bit RSA key, ID 86B9E988681A51D1,                         │
│ created 2021-12-15.                                            │
│                                                                │
│ Passphrase: __________________________________________________ │
│                                                                │
│          <OK>                                    <Cancel>      │
└────────────────────────────────────────────────────────────────┘

Instead of manually entering the passphrase into the GUI inside the console, can it be piped into gpg --dry-run --passwd <key_id>, and can its output be used to verify whether the given passphrase is correct or not? Related: https://stackoverflow.com/q/11381123/2402577
Try

gpg --batch --pinentry-mode loopback --passphrase-file=/home/user/.gpg_pass.txt --dry-run --passwd your-keyid

as the man page says these are the options that allow gpg to get the password from a file. Note that if you want to do this from inside a script, I'd assume it sets the return code depending on the outcome, so check the return code ($? in most shells; use echo $? if you want to check manually).
How can I check passphrase of gpg from a file?
Maybe this question is not good, but I have tried for two days, still with no success. I want to download version 1.4.13 of gnupg, but I found that the various gnupg websites are not accessible. I saw gnupg on github, but when I use git clone -b 1.4.13 https://gitee.com/mirrors/GnuPG.git, it shows that there is no 1.4.13, which is very frustrating. I'm doing some experiments on side channels. In the experiments I want to use vulnerabilities in the encryption algorithm of gnupg version 1.4.13 to infer data; other versions fix this problem, so there is no way to use them. The server used is Linux mprc-PowerEdge-R730 4.15.0-122-generic #124~16.04.1-Ubuntu SMP Thu Oct 15 16:08:36 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux. Thanks
I want to download the 1.4.13 version of gnupg

https://gnupg.org/ftp/gcrypt/gnupg/gnupg-1.4.13.tar.bz2
https://gnupg.org/ftp/gcrypt/gnupg/gnupg-1.4.13.tar.bz2.sig
How to download 1.4.13 version of gnupg in linux server?
I have a command piping an encrypted gpg stream into curl:

echo "Some Text" | gpg -o - | curl --silent -T - \
    -X PUT \
    --output /dev/null \
    ${myurl}

When running this, I see something like

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Enter passphrase
Passphrase:

where the "Enter passphrase" part is attached to the end of the curl output. I am unable to put in newlines due to the piped nature of things. Is there a way to make the curl output run later?
How to prevent curl output from printing a preceding pipe's output? curl does not print the "preceding pipe's output". What happens is that curl and gpg both print messages to your terminal, and the messages interfere with each other. Remember that piped commands run concurrently. If you expect the output from successful gpg never to be empty then use ifne:

… | gpg … | ifne curl …

(In Debian ifne is in the moreutils package.) The main purpose of ifne is to prevent a command (here: curl) from running if the stdin is empty. To know if the stdin is really empty, the tool needs to wait for any data or EOF. This delays the actual command. A delay is what you need. I expect gpg to start printing only after it gets the right passphrase. I did not test ifne thoroughly with a preceding gpg asking for a passphrase, but I did test with sudo … asking for a password. If gpg works as I expect then ifne can help you. It will run curl after you provide the passphrase. Note ifne will prevent curl from running if gpg prints nothing to its stdout (e.g. because the passphrase was wrong). This behavior is often desired. However if the output from successful gpg can be empty and still should be piped to curl, then ifne alone is not the right thing. If you want to run curl regardless of whether gpg prints anything then keep reading. The code below is a reduced version of my report script from this answer. Save it as delay. It uses ifne in a more complicated way. The code must be an executable script (I mean don't try to make it a shell function) because it calls itself via ifne "$0" (and ifne cannot run a function).

#!/bin/bash
if [ "$1" = "-X" ]; then
   marker=1
   fd="$2"
   shift 2
else
   unset marker
   exec {fd}>&1
fi
if [ "$marker" ]; then
   ( >&"$fd" "$@" )
else
   ifne -n "$0" -X "$fd" "$@" | ifne "$0" -X "$fd" "$@"
fi

The purpose of delay is to delay execution of a command until there is data or EOF in the stdin. It was designed to be used after |.
You should use it like this:

… | gpg … | delay curl …

curl will run only after gpg starts printing to its stdout or closes it for whatever reason. Some technical details are explained in the already linked answer. The difference between ifne foo, ifne -n foo and delay foo:

ifne foo waits for data or EOF and then runs foo iff there is data;
ifne -n foo waits for data or EOF and then runs foo iff there is no data;
delay foo waits for data or EOF and then runs foo unconditionally.
How to prevent curl output from printing a preceding pipe's output?
openssl this way can only encrypt small files:

openssl rsautl -encrypt -pubin -inkey public_key.pem -in secret.txt -out secret.enc

openssl as suggested here throws an error:

openssl smime -encrypt -aes-256-cbc -binary -in secret.txt -outform DER -out secret.txt.der public_key.pem

(not that you're supposed to be using smime, because that's for mail, but still). The error:

unable to load certificate 140222726453056:error:0909006C:PEM routines:get_name:no start line:crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE

Why is openssl complaining about a trusted certificate? That's my business. I know it means that it wants the PEM to be of a different format, rather than taking it personally. What do I want? I want to use bash to encrypt any file with strong encryption using a public PEM file (or other public key) so that my project counterpart can use their private key to decrypt it. It would be awesome if I could use PowerShell native tools as well, but that's a big ask and I just thought I'd throw it in, in hope of avoiding Git Bash for Windows-homed recipients. I could use gpg, but we do not want to introduce generating password generated keys.
Well, the first problem is that you really don't want to encrypt a large file with an asymmetric cipher. Nobody does this, as they are slow, and limited in size in any case. What you do is create a session key for a symmetric cipher, encrypt the session key with the asymmetric cipher, encrypt the large file with the symmetric cipher, and then package things together. This is what both SSL and PGP/GPG do. The rsautl command you found is a low-level tool for doing the asymmetric encoding; you would need several other commands to do the full process. There are two basic solutions. Sign your public key with your private key to create a certificate; this should let openssl smime work. Or use GPG, which is what it is designed for. I recommend this for what you've described. Probably gpg -se, or gpg -sea if you are emailing. (Don't do gpg -c, which I think is your "password generated keys".) Using either SSL or GPG, both parties must generate public and private keys, sign the public key with the private key, and send the signed public key to the other party. Generally, the only passwords involved are for encrypting the private key so that somebody who can see your files can't decrypt your data. The major difference between SSL and PGP/GPG is the certification model. SSL uses "Certificate Authorities". PGP/GPG uses a chain of certificates, where Anne signs Bob's key, and Bob signs Carol's key, and thus Anne can trust that Carol's key belongs to Carol. Finally, if you are emailing the files, you might just look to see if you can get either SMIME or PGP integration in your email client.
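A hedged sketch of that hybrid scheme in plain openssl, generating a throwaway keypair so it is self-contained (in practice you would already have the counterpart's public_key.pem; -pbkdf2 assumes OpenSSL 1.1.1+):

```shell
# Throwaway RSA keypair standing in for the recipient's real one.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private_key.pem
openssl pkey -in private_key.pem -pubout -out public_key.pem

echo 'a large payload would go here' > secret.txt

# Sender: random session key; AES for the data, RSA only for the 32-byte key.
openssl rand -out session.key 32
openssl enc -aes-256-cbc -pbkdf2 -in secret.txt -out secret.enc -pass file:session.key
openssl pkeyutl -encrypt -pubin -inkey public_key.pem -in session.key -out session.key.enc

# Recipient: unwrap the session key with the private key, then decrypt the data.
openssl pkeyutl -decrypt -inkey private_key.pem -in session.key.enc -out session2.key
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -out secret.dec -pass file:session2.key
```

You would then ship secret.enc plus session.key.enc to the counterpart; that pair is the "package things together" step.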
How can I use an unsigned public key to encrypt a large file using openssl?
I have some strange behavior I don't understand. I'm just trying to list some files in a directory:

sudo find /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root

produces:

/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/trustdb.gpg
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/private-keys-v1.d
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent.extra
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/pubring.kbx~
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent.browser
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/pubring.kbx
/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent.ssh

So I know the .gnupg directory exists, and has files in it.

sudo ls -la /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root

produces:

total 12
drwx------ 3 root root 4096 Sep 21 14:54 .
drwxr-xr-x 3 root root 4096 Aug 24 18:30 ..
drwxr-xr-x 3 root root 4096 Sep 21 14:54 .gnupg

So the directory itself has rwx permissions. But the command

sudo ls -la /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*

gives:

ls: cannot access '/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*': No such file or directory

I've checked the path over and over and can't see anything wrong. I have rwx permissions and root-level access. What else could stop me from listing this directory? My ultimate goal is to do a chmod 600 /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*, which also fails. But for now I'd settle for ls. Edit: It just hit me. Does this have to do with the file globbing? Does the * expand before sudo, and therefore without root access?
As you noted correctly, the expansion order is the root of your problem. The step relevant for filename globs is "filename expansion". Although it is rather late in the order of expansions (see here e.g.), it is performed before the command is actually invoked. This means that e.g. ls * in a directory containing file1.txt, file2.txt and file3.txt is actually called as ls file1.txt file2.txt file3.txt. The problem now is that the calling user doesn't have the necessary permissions to enter the .gnupg directory. Hence, the expansion of the * filename glob will fail. In such cases, unless specific shell options are set, the * will remain literal on the command line, so that the shell would try to perform the ls command on a file literally called *. Hence the error message cannot access '/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*' As you see, it tries to access a file called * inside /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/. What you can do instead is write a shell script that does the relevant operations, which you then call as root using sudo. You would need to place the script under a path accessible from everywhere, such as /usr/local/bin.
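A sketch of that script approach, with the path from the question (the script name fix-gnupg-perms is made up for this example; running sudo sh -c 'chmod 600 …/.gnupg/*' would defer the expansion to a root shell in the same way):

```shell
# The glob lives inside the script, so it is expanded only after
# privileges have been raised.
cat > fix-gnupg-perms <<'EOF'
#!/bin/sh
chmod 600 /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*
EOF
chmod +x fix-gnupg-perms

# Then install it somewhere root can reach and run it:
#   sudo install -m 755 fix-gnupg-perms /usr/local/bin/
#   sudo /usr/local/bin/fix-gnupg-perms
```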
ls shows "no such file", but why? [duplicate]
I'm new to GPG and I've learned some basic GPG commands recently. I just found that using the same GPG command to encrypt the same data always generates a different result. Below is an example. Is this normal? I took a quick look at the GPG man page and found there is a parameter --faked-system-time, which seems to indicate that GPG encryption uses the system time as an input. Is this true?

root@EPC:~# echo xxx | gpg -ear [email protected]
-----BEGIN PGP MESSAGE-----

hQIMAzha1PwyPp1PAQ//byuSt9hlwYAZAjXxC/eychTXbEvA8HnuCDaclczOVd3r
FSrKfMPAcyWE+XLWZfQrKJ0gKQIF2lJNFHCXieDYi2AA0pOUKzatUvlccJV7BKk2
mY2LmH6R7rNh5Us+Es/xut03TWmjVFzXtsHRQBazVpn19PyB7ybusWTv05POmLqD
nZe2l0uhjfQxtBG/a5leNF8cg5vhun7i0tHL/6y4yYCjEBO8zCl4lmwTaLwfVAPM
VoLCtX+3UBmolRsA3zod/Fbo9bFAH7lT1w1nTd0Oq6jnXNQZib81pWtfgCZ1WI9S
XfEptr8TtOZenEgY8azXIzORhiV1rXJkmqS1ofIhAn4FhvEljQN3h0buml3pfVyn
Q9b/toEBLNS/Vin3+NQP/wzp3iB0ykRrTSVT0BZsfE52do5tqtbSPFPkZoF1ncof
u8kRH5ccDXAT0tUqZnyfkvasadtr05yjV+W0A6rwjQ4TE7AdTpICiKrcrLLDd47s
42a4bbm0BVd63uHG7fwBXZ7lsdG+3Mjs+WwEDURVAGUc0qv3dGQf3m7+P3vivTsv
dT2I65c0tlyOMjOqSvUzBia153gRz6aNuf0YlvD1l6ULiR7pqkG9Zu+EWXdDWsXE
xnhZXxX9Y9kxz3GLtaWmTOFJeWlfzJ07vtE5I2qSZXT8krsc06WthdzHPOOnRWbS
QAH+iINECeVZCS88z41su7kHeDaPDHSTS+YRToLL9K+1Y4jSrQ0aCK7qx1reHKC6
NqPDBzVeMHzXtWIylJwgM3E=
=d1Ye
-----END PGP MESSAGE-----
root@EPC:~# echo xxx | gpg -ear [email protected]
-----BEGIN PGP MESSAGE-----

hQIMAzha1PwyPp1PARAAg+cR4+vLw2uFKWUUuf7j8Yf3WZU7v3Xxw+gT5F/yo3fo
dViI7dwW/Q5mq32HSiUxDqDsEULObcyQFr6/B2by+9t6/4SHf2UIYMFd5nZvprIt
gcQswXEVw6BLpmutDgg0/letxlFtSON70d8aB/OqoaL3OxQX3b6prw3ZCv/UcoYa
BLXm8W24F2donPHos4BYoSWNFKzNq/Z/6LnTkVaF1o8Z7OSIKCnhV3t4vwnaC/os
q34AA65f/lrDTgudMo7UoznLlDLK+VeBJaU9772z76uZnm+LVEDPt695kKBlpmBt
qQNzBY8ZL2sUQ2aq2RqjWA12dh06r3P8k45RyMKvn6Iubf0rfUK9kHUknQRM7q3A
gcQBAO2yR97ZcCAffPLjk3ZGcJ7eh92PRET8d8l2/lAKDQ24GdJENcgHBXhGu8VV
FivA9AkGThzRxtghSV1ZuX3yi7MbOSjdrs0hx82ZH73kj34+GYSkz8Q4sHK3KnM8
AZ4MUeTdM9eQET8fQVziLQU+PRxFqNo3DKb0uoqI41/klntc4VgiUOMYsqiESJBs
LF0AspsRQFWd6hO9y314tsetoIcK3xPkLq9kX2T0qYuIdgvLoe9i+kE2QxdsamBH
0Rlrm1AcRmIKBkjClGSbMsHsa6uJZD+zfDOpmWa/zgtxWe9tdjH3Er5NlR6VRDLS
QAFqDVk3yWTnq2OZotpggz+OhUz0aGKiPWhpvpdQS4yQGfiE0a94drOBEnEpuPLJ
kb0j1SLja8v09v4rLmZhhBk=
=0COd
-----END PGP MESSAGE-----
root@EPC:~#
Yes, GnuPG stores the time in the encrypted data. To see this, run

echo xxx | gpg -ear [email protected] > encryptedexample

(using an id for which you have a private key), then

gpg --list-packets --verbose encryptedexample

will end with something like

:literal data packet:
        mode b (62), created 1599715776, name="",
        raw data: 4 bytes

where 1599715776 is the timestamp of the packet, in seconds since the epoch.
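The same inspection can be reproduced without a keypair by encrypting symmetrically; the literal data packet still carries the creation timestamp (the passphrase here is a placeholder):

```shell
# Encrypt a short message with a symmetric passphrase.
echo xxx | gpg --batch --pinentry-mode loopback --passphrase demo -c -a > enc.asc

# Decode the packets; note the "created <seconds since epoch>" field
# on the line after the literal data packet header.
gpg --batch --pinentry-mode loopback --passphrase demo \
    --list-packets --verbose enc.asc | grep -A1 'literal data packet'
```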
Same gpg command applied on the same string but got different result
I have a file that is decrypted with a command similar to:

gpg --batch --yes -q -d --passphrase-fd 0 -o "/var/file.out" "/var/file.gpg" < /var/secret.key

I want to change the content of /var/file.gpg, but the decryption should continue to work as before. Any idea how to encrypt it? (I was able to find some examples with pass phrases (which I suppose is what the key file is used for) and sender and receiver (which I suppose I don't need), but they were not working so far.)
Ok, I've found what I needed:

passphrase=$(head -n 1 /var/secret.key)
gpg --symmetric --batch --yes --passphrase "$passphrase" --output some.gpg toEncrypt.txt

(Quote "$passphrase" so a passphrase containing spaces survives word splitting.)
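If you'd rather not pull the passphrase into a shell variable at all, gpg can read the key file directly. A self-contained sketch with stand-in paths (on GnuPG 2.1+ the loopback pinentry option is needed for this to run non-interactively):

```shell
# Stand-ins for /var/secret.key and the file to encrypt.
printf 'secretpass\n' > secret.key
echo 'hello' > toEncrypt.txt

# Encrypt: gpg reads the first line of the key file as the passphrase.
gpg --symmetric --batch --yes --pinentry-mode loopback \
    --passphrase-file secret.key --output some.gpg toEncrypt.txt

# Decrypt the same way the original command does.
gpg --batch --yes -q -d --pinentry-mode loopback \
    --passphrase-file secret.key some.gpg
```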
How to encrypt file with gpg and a passphrase only?
I'm on Linux Mint 19.1 Cinnamon. I have a minor problem with identifying the actual up-to-date GPG public key used by apt-get for the Spotify music application. I would like to remove the old, deprecated public keys, and I'd like to do all this from the CLI, if possible. I'm unsure where to start; could anyone point me in the right direction?
List apt keys as root, filtering for the spotify string:

# apt-key list 2>&1 | grep -i spotify -B 2

Remove all but the newest key with:

# apt-key del <keyid>

Example output in my case:

pub   rsa4096 2018-05-23 [SC] [expires: 2019-08-16]
      931F F8E7 9F08 7613 4EDD  BDCC A87F F9DF 48BF 1C90
uid   [ unknown] Spotify Public Repository Signing Key <[email protected]>
--
uid   [ unknown] Microsoft (Release signing) <[email protected]>
/etc/apt/trusted.gpg.d/spotify-2017-07-25-341D9410.gpg
--
pub   rsa4096 2017-07-25 [SC] [expired: 2018-07-25]
      0DF7 31E4 5CE2 4F27 EEEB  1450 EFDC 8610 341D 9410
uid   [ expired] Spotify Public Repository Signing Key <[email protected]>
/etc/apt/trusted.gpg.d/spotify-2018-05-23-48BF1C90.gpg
--
pub   rsa4096 2018-05-23 [SC] [expires: 2019-08-16]
      931F F8E7 9F08 7613 4EDD  BDCC A87F F9DF 48BF 1C90
uid   [ unknown] Spotify Public Repository Signing Key <[email protected]>

Tip: you can use the fingerprint to delete the key:

# apt-key del "0DF7 31E4 5CE2 4F27 EEEB 1450 EFDC 8610 341D 9410"
How to find out which public key is actually in use and remove deprecated ones (gpg)?
1,541,010,146,000
I have a problem with gpg2 and signing my commits in git. I should preface all this by saying it all worked yesterday, before I did an apt-get update && apt-get upgrade and a reboot. Now when I try to sign my commits I get the following error message:

gpg: skipped "3C27FEA3B5758D9E": No secret key
gpg: signing failed: No secret key
error: gpg failed to sign the data
fatal: failed to write commit object

Actually, I seem to get it when I try to stash my changes too. When I do a pgrep I can see that gpg-agent is running, so I've killed it and restarted it. I also have this in my .bashrc file:

export GPG_TTY=$(tty)

Output of gpg2 --list-keys:

/home/mdhas/.gnupg/pubring.gpg
------------------------------
pub   rsa2048/FBJJJJ1C 2017-10-11 [SC]
uid   [ultimate] Mark Dhas <[email protected]>
sub   rsa2048/3FDJJJJJ 2017-10-11 [E]

pub   rsa2048/BFJJJJJ7 2017-11-17 [SC]
uid   [ultimate] Mark Dhas <[email protected]>
sub   rsa2048/DEDDJJJJ 2017-11-17 [E]

pub   rsa4096/7137JJJJ 2017-10-11 [SC] [expires: 2021-10-11]
uid   [ unknown] co.co <[email protected]>
sub   rsa4096/A9BJJJJJ 2017-10-11 [E] [expires: 2021-10-11]

pub   rsa4096/B57JJJJJ 2018-10-31 [SC] [expires: 2021-10-31]
uid   [ unknown] Mark Dhas (New Key-Created on 2018-10-31) <[email protected]>
sub   rsa4096/36FJJJJJ 2018-10-31 [E] [expires: 2021-10-31]

Please ignore the JJJJJ's; they are an attempt at a small amount of redaction for security purposes.

$ gpg2 --list-secret-keys
/home/mdhas/.gnupg/pubring.gpg
------------------------------
sec   rsa2048/FBJJJJ1C 2017-10-11 [SC]
uid   [ultimate] Mark Dhas <[email protected]>
ssb   rsa2048/3FDJJJJJ 2017-10-11 [E]

And this is a section of my git config:

user.name=Mark Dhas
[email protected]
user.signingkey=3C2JJJJJJJJJJJJJ
core.editor=vim
gpg.program=/usr/bin/gpg2

Any ideas on how to rectify this issue would be great.
You don't have the private part of your GPG key. A GPG key consists of a public key, the piece of information that other computers can use to verify signatures coming from you, and the private key, the part that is needed to create a signature or decrypt messages sent to you. This is why Git is giving you an error. It can't get the private key to sign the commit. Your only option is to find a backup of the entire key (one that includes the private key), or to create a new key.
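A quick way to confirm this diagnosis (command names as in the question; on other systems gpg2 may simply be gpg) is to compare the key git is configured to sign with against the keys gpg actually holds secret material for:

```shell
# The configured signing key (falls back to a note if unset or git is absent).
git config user.signingkey || echo '(no signingkey configured)'

# Keys with secret material available; the signingkey above must appear
# in this list for commit signing to work.
gpg2 --list-secret-keys --keyid-format long 2>/dev/null ||
    gpg --list-secret-keys --keyid-format long
```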
gpg2 and git signing
Duplicity by default asks for the passwords to GnuPG keys on startup and caches them in memory for the lifetime of the process. Is there a way to have it use /usr/bin/pinentry instead, so I know I'm not passing the passphrase through Duplicity? I'm using GnuPG hardware smart cards, so generally when a sign or decrypt operation is requested, I get a pinentry dialogue that pops up, and I can configure gpg-agent to cache my passphrases for a set amount of time. Is there a way to have Duplicity not know my GnuPG keys, and instead use the GnuPG agent and pinentry to bypass inputting my passphrase into Duplicity?
Duplicity does not cache gpg passphrases by default (you can supply them as environment variables, though). All prompts you see come from the gpg binary run underneath. Hence, once you configure your gpg into the desired state, duplicity will use it as configured and you are set. To use gpg-agent, read what the --use-agent parameter does on the manpage: http://duplicity.nongnu.org/duplicity.1.html
Configure Duplicity to use pinentry?
1,541,010,146,000
I am currently trying to add a new repository to apt-get but it is not working. I'm adding it like this: deb http://volatile.debian.org/debian-volatile wheezy/volatile non-free Then I get this error: Get:11 http://volatile.debian.org wheezy/volatile Release [7626 B] Ign http://volatile.debian.org wheezy/volatile Release E: GPG error: http://volatile.debian.org wheezy/volatile Release: The following signatures were invalid: NODATA 1 NODATA 2 My system is running on Debian 7. How could I fix this?
Volatile was discontinued with Squeeze; the replacement is suite-updates (in your case, wheezy-updates) on the standard mirrors: deb http://ftp.debian.org/debian wheezy-updates main You can add contrib non-free if necessary.
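For reference, a complete stanza with the optional components spelled out might look like the following (whether you actually need contrib and non-free depends on the packages you use):

```
# /etc/apt/sources.list -- wheezy plus the replacement for volatile
deb http://ftp.debian.org/debian wheezy main contrib non-free
deb http://ftp.debian.org/debian wheezy-updates main contrib non-free
```

Run apt-get update afterwards to refresh the package indexes.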
GPG error with http://volatile.debian.org : 'The following signatures were invalid: NODATA 1 NODATA 2'
1,541,010,146,000
Recently I happened to experience a quite unpleasant power outage which caused my system to go down unexpectedly. After power came back the system came back as well, and other than an fsck being required all seemed fine. The unedifying surprise hit me when I first tried to access my pass password store - it complained about gpg being broken; so I checked with plain gpg and got this: gpg: failed to create temporary file `/home/meUser/.gnupg/.#lk0x14368b8.meBox.13459': Not a directory gpg: keyblock resource `/home/meUser/.gnupg/secring.gpg': general error gpg: failed to create temporary file `/home/meUser/.gnupg/.#lk0x14379f0.meBox.13459': Not a directory gpg: keyblock resource `/home/meUser/.gnupg/pubring.gpg': general error Does this mean that gnupg is broken on my system now or is this something user specific? I guess my password store is gone; just wondering how to fix gpg to set up a new store.
Your folder /home/meUser/.gnupg is gone. You have to restore it from lost+found or backup.
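The restore itself is just copying the directory back and re-applying the strict permissions GnuPG insists on. A sketch using throwaway paths (substitute /home/meUser and your actual backup location):

```shell
set -e
# Scratch stand-ins for the real home directory and backup location
home=$(mktemp -d)
backup=$(mktemp -d)
mkdir -p "$backup/.gnupg"
touch "$backup/.gnupg/pubring.gpg" "$backup/.gnupg/secring.gpg"

# Copy the keyring directory back, preserving attributes where possible
cp -a "$backup/.gnupg" "$home/"

# GnuPG refuses to work with loose permissions: 0700 dir, 0600 files
chmod 700 "$home/.gnupg"
chmod 600 "$home/.gnupg/"*.gpg

stat -c '%a %n' "$home/.gnupg"   # prints 700 plus the directory path
```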
gnupg broken after power outage
1,541,010,146,000
I'm having a problem building GPG on my system; when I try to run make, it gets fairly far before it suddenly bails out with an error. Here's my latest result from running it: make all-recursive Making all in m4 make[2]: Nothing to be done for `all'. Making all in gl make all-am make[3]: Nothing to be done for `all-am'. Making all in include make[2]: Nothing to be done for `all'. Making all in jnlib make[2]: Nothing to be done for `all'. Making all in common make all-am make[3]: Nothing to be done for `all-am'. Making all in kbx make[2]: Nothing to be done for `all'. Making all in g10 make[2]: Nothing to be done for `all'. Making all in keyserver make[2]: Nothing to be done for `all'. Making all in sm make[2]: Nothing to be done for `all'. Making all in agent make[2]: Nothing to be done for `all'. Making all in scd gcc -DHAVE_CONFIG_H -I. -I.. -I../gl -I../intl -I../common -DLOCALEDIR=\"/usr/local/share/locale\" -DGNUPG_BINDIR="\"/usr/local/bin\"" -DGNUPG_LIBEXECDIR="\"/usr/local/libexec\"" -DGNUPG_LIBDIR="\"/usr/local/lib/gnupg\"" -DGNUPG_DATADIR="\"/usr/local/share/gnupg\"" -DGNUPG_SYSCONFDIR="\"/usr/local/etc/gnupg\"" -g -O2 -Wall -Wno-pointer-sign -Wpointer-arith -MT gnupg_pcsc_wrapper-pcsc-wrapper.o -MD -MP -MF .deps/gnupg_pcsc_wrapper-pcsc-wrapper.Tpo -c -o gnupg_pcsc_wrapper-pcsc-wrapper.o `test -f 'pcsc-wrapper.c' || echo './'`pcsc-wrapper.c pcsc-wrapper.c:69: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘int’ pcsc-wrapper.c:129: error: expected specifier-qualifier-list before ‘pcsc_dword_t’ pcsc-wrapper.c:149: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘pcsc_protocol’ pcsc-wrapper.c:153: error: expected ‘)’ before ‘scope’ pcsc-wrapper.c:160: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:162: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:164: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:167: error: expected
declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:168: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:170: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:172: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:173: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:174: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:175: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:177: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:179: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:180: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:181: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:182: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:185: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:189: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:192: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:194: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:196: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:198: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:200: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c:201: error: expected declaration specifiers or ‘...’ before ‘pcsc_dword_t’ pcsc-wrapper.c: In function ‘load_pcsc_driver’: pcsc-wrapper.c:347: error: ‘pcsc_establish_context’ undeclared (first use in this function) pcsc-wrapper.c:347: error: (Each undeclared identifier is reported only once 
pcsc-wrapper.c:347: error: for each function it appears in.) pcsc-wrapper.c: In function ‘handle_open’: pcsc-wrapper.c:411: error: ‘pcsc_dword_t’ undeclared (first use in this function) pcsc-wrapper.c:411: error: expected ‘;’ before ‘nreader’ pcsc-wrapper.c:413: error: expected ‘;’ before ‘card_state’ pcsc-wrapper.c:428: warning: implicit declaration of function ‘pcsc_establish_context’ pcsc-wrapper.c:437: error: ‘nreader’ undeclared (first use in this function) pcsc-wrapper.c:437: error: too many arguments to function ‘pcsc_list_readers’ pcsc-wrapper.c:446: error: too many arguments to function ‘pcsc_list_readers’ pcsc-wrapper.c:487: error: ‘pcsc_protocol’ undeclared (first use in this function) pcsc-wrapper.c:487: warning: passing argument 3 of ‘pcsc_connect’ makes pointer from integer without a cast pcsc-wrapper.c:487: error: too many arguments to function ‘pcsc_connect’ pcsc-wrapper.c:509: error: expected ‘;’ before ‘readerlen’ pcsc-wrapper.c:511: error: ‘atrlen’ undeclared (first use in this function) pcsc-wrapper.c:512: error: ‘readerlen’ undeclared (first use in this function) pcsc-wrapper.c:515: error: ‘card_state’ undeclared (first use in this function) pcsc-wrapper.c:515: error: ‘card_protocol’ undeclared (first use in this function) pcsc-wrapper.c:516: error: too many arguments to function ‘pcsc_status’ pcsc-wrapper.c: In function ‘handle_close’: pcsc-wrapper.c:558: error: ‘pcsc_protocol’ undeclared (first use in this function) pcsc-wrapper.c: In function ‘handle_status’: pcsc-wrapper.c:587: error: ‘struct pcsc_readerstate_s’ has no member named ‘current_state’ pcsc-wrapper.c:590: error: too many arguments to function ‘pcsc_get_status_change’ pcsc-wrapper.c:602: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:604: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:607: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:614: error: ‘struct 
pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:624: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:625: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:626: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:627: error: ‘struct pcsc_readerstate_s’ has no member named ‘event_state’ pcsc-wrapper.c:629: error: ‘pcsc_protocol’ undeclared (first use in this function) pcsc-wrapper.c: In function ‘handle_reset’: pcsc-wrapper.c:645: error: ‘pcsc_dword_t’ undeclared (first use in this function) pcsc-wrapper.c:645: error: expected ‘;’ before ‘nreader’ pcsc-wrapper.c:646: error: expected ‘;’ before ‘card_state’ pcsc-wrapper.c:660: error: too many arguments to function ‘pcsc_disconnect’ pcsc-wrapper.c:678: error: ‘pcsc_protocol’ undeclared (first use in this function) pcsc-wrapper.c:678: warning: passing argument 3 of ‘pcsc_connect’ makes pointer from integer without a cast pcsc-wrapper.c:678: error: too many arguments to function ‘pcsc_connect’ pcsc-wrapper.c:689: error: ‘atrlen’ undeclared (first use in this function) pcsc-wrapper.c:690: error: ‘nreader’ undeclared (first use in this function) pcsc-wrapper.c:693: error: ‘card_state’ undeclared (first use in this function) pcsc-wrapper.c:693: error: ‘card_protocol’ undeclared (first use in this function) pcsc-wrapper.c:694: error: too many arguments to function ‘pcsc_status’ pcsc-wrapper.c: In function ‘handle_transmit’: pcsc-wrapper.c:716: error: ‘pcsc_dword_t’ undeclared (first use in this function) pcsc-wrapper.c:716: error: expected ‘;’ before ‘recv_len’ pcsc-wrapper.c:729: error: ‘pcsc_protocol’ undeclared (first use in this function) pcsc-wrapper.c:734: error: ‘recv_len’ undeclared (first use in this function) pcsc-wrapper.c:736: warning: passing argument 4 of ‘pcsc_transmit’ makes pointer from integer without a cast pcsc-wrapper.c:736: error: too many arguments to function ‘pcsc_transmit’ 
pcsc-wrapper.c: In function ‘handle_control’: pcsc-wrapper.c:756: error: ‘pcsc_dword_t’ undeclared (first use in this function) pcsc-wrapper.c:756: error: expected ‘;’ before ‘ioctl_code’ pcsc-wrapper.c:757: error: expected ‘;’ before ‘recv_len’ pcsc-wrapper.c:763: error: ‘ioctl_code’ undeclared (first use in this function) pcsc-wrapper.c:767: error: ‘recv_len’ undeclared (first use in this function) pcsc-wrapper.c:769: error: too many arguments to function ‘pcsc_control’ make[2]: *** [gnupg_pcsc_wrapper-pcsc-wrapper.o] Error 1 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2
Your version of the GPG source code has a typo on line 69 in scd/pcsc-wrapper.c: someone mistyped unsigned as unsinged. Fix that, and GPG will compile.
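The one-line fix can be applied with sed before re-running make. The sketch below recreates the misspelling in a scratch copy, since the exact content of line 69 isn't shown here — the real target is scd/pcsc-wrapper.c in the source tree:

```shell
set -e
# Scratch file standing in for scd/pcsc-wrapper.c; the declaration is made up,
# only the "unsinged" misspelling matches the real bug
mkdir -p scratch/scd
echo 'static unsinged int pcsc_api_loaded;' > scratch/scd/pcsc-wrapper.c

# Correct the typo in place, then the build can be retried with make
sed -i 's/unsinged/unsigned/' scratch/scd/pcsc-wrapper.c
cat scratch/scd/pcsc-wrapper.c   # prints: static unsigned int pcsc_api_loaded;
```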
Problem building GPG
1,541,010,146,000
The pass program is a command line utility to store passwords plus free form extra data in small files encrypted with gpg. It provides a grep sub-command in particular to find passwords by the extra data. But this grep sub-command is painfully slow on my machine. I have nearly 200 passwords stored and internally each file is decrypted with gpg like so (without the time in front, of course): % time gpg -d --quiet --yes --compress-algo=none --no-encrypt-to stackoverflow.gpg the password output user=0,000 sys=0,006 wall=0,382 (1,61) Wall time is nearly 0.4 seconds which adds up to around 1 minute to grep through all files. The gpg-agent is running and I have this version: gpg (GnuPG) 2.2.27 Two suspicions why this is slow: Startup of gpg and communication with gpg-agent is slow, supported by the fact that user+sys times are small in comparison. gpg-agent is slow, supported by the fact that after a pass grep run its cumulative CPU time is increased by 60 seconds, nicely matching the total time of the complete run. Together, both point to gpg-agent, though I have no idea why the agent should be so slow. With ps I see it running as /bin/gpg-agent --sh --daemon Can someone shed some light on whether ~ 0.3 CPU seconds is reasonable for the agent per file or whether there is a way to improve this? EDIT: Further Findings Attaching strace to the agent, I find this: 20200 14:57:03.701648 getrusage(RUSAGE_SELF, {ru_utime={tv_sec=133, tv_usec=890780}, ru_stime={tv_sec=0, tv_usec=99975}, ...}) = 0 20200 14:57:03.701666 clock_gettime(CLOCK_PROCESS_CPUTIME_ID, {tv_sec=133, tv_nsec=990762100}) = 0 20200 14:57:04.063523 getpid() = 18035 where we have 360ms between clock_gettime and the getpid call. And with ltrace: 20472 15:04:55.035574 strlen("my-password-here") = 10 20472 15:04:55.035641 gcry_kdf_derive(0x7d884b82c008, 10, 19, 2) = 0 20472 15:04:55.394727 gcry_cipher_setkey(0x7d884b82cbc0, 0x7d884b82c030, 16, 0x7d884b83c000) = 0 So gcry_kdf_derive takes 360ms. 
Whatever it does, can I get it to cache its result for a few seconds with some config setting? (... goes fetching the source code).
KDFs are Key Derivation Functions that convert a password into a cryptographic key, which GPG needs in order to decrypt your passwords. To make password guessing expensive, KDFs are all intentionally slow by design. Generally GPG will cache a private key once you've entered its password or used it recently, though I think it only does this on a per-key basis. If these files really are encrypted with the same key from the KDF (and do not incorporate unique information from the key itself in each KDF call), then you could probably only keep the results from the KDF by writing your own code on top of gcrypt. The alternative would be using a master key in GPG that is unlocked with a single KDF run and then used to decrypt the individual keys and passwords while it is cached. I have no idea if either of these works with your system.
Decrypting multiple files quicker with gpg
1,541,010,146,000
On the server there is no browser nor GUI, so is it possible to disable the gpg-agent-browser.socket? On Debian 10.13 I can see those files: /usr/lib/systemd/user/gpg-agent.socket /usr/lib/systemd/user/gpg-agent.service /usr/lib/systemd/user/gpg-agent-ssh.socket /usr/lib/systemd/user/sockets.target.wants/gpg-agent.socket /usr/lib/systemd/user/sockets.target.wants/gpg-agent-ssh.socket /usr/lib/systemd/user/sockets.target.wants/gpg-agent-extra.socket /usr/lib/systemd/user/sockets.target.wants/gpg-agent-browser.socket /usr/lib/systemd/user/gpg-agent-extra.socket /usr/lib/systemd/user/gpg-agent-browser.socket I could not find the answer on the Internet; do you have an idea of how to disable the gpg-agent-browser stuff? Thanks & cheers
This is a user service, so it’s only active in user sessions started by systemd. On a server, it’s likely you don’t have any most of the time. If you do want to disable it, you need to do so for each user: systemctl --user disable gpg-agent-browser.socket
Debian: how to disable gpg-agent-browser.socket
1,541,010,146,000
I have been losing my mind due to IntelliJ not wanting to commit my code. I had put export GPG_TTY=$(tty) into my .bash_profile instead of my .bashrc; echoing $GPG_TTY responded with the proper path, but I still kept getting the gpg: failed to sign the data error. From what I gather, .bash_profile is read and executed when Bash is invoked as an interactive login shell, while .bashrc is executed for an interactive non-login shell; $PATH variables should thus go into the .bash_profile...? Or, at least, so I thought. I'd like to know what the functional difference is between the two: why didn't it work from my profile but it does from the rc?
Solution to the problem: put the export into .bashrc, and source .bashrc from .bash_profile. Yes, the difference is login vs non-login. Bash runs in login mode when you log in to a shell directly, for example through SSH or on a machine without a GUI. But once you are logged in, all new instances of bash start in non-login mode. If you have a GUI desktop, you log in to the window manager and not into bash, so you never get bash in login mode (unless you specifically invoke it with the --login option). It is therefore somewhat pointless to even have a .bash_profile on a workstation with a GUI. You can still keep one, though, in case you log in to that machine and user through SSH, or your WM breaks and you need to start recovery procedures. And of course, the official documentation is a must-read: https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Bash-Startup-Files
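The suggested setup — a .bash_profile that simply delegates to .bashrc — can be demonstrated with scratch files standing in for the real dotfiles:

```shell
set -e
profile=$(mktemp)   # stand-in for ~/.bash_profile
rcfile=$(mktemp)    # stand-in for ~/.bashrc
echo 'export DEMO_VAR=from_bashrc' > "$rcfile"

# The profile, read by login shells only, just pulls in the rc file so that
# login and non-login shells end up with the same environment
cat > "$profile" <<EOF
if [ -f "$rcfile" ]; then
    . "$rcfile"
fi
EOF

# Sourcing the profile (as a login shell would) now exposes the variable
bash -c ". '$profile' && echo \$DEMO_VAR"   # prints: from_bashrc
```

In the real files the guard would read: if [ -f ~/.bashrc ]; then . ~/.bashrc; fi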
How does bashrc function differently from bash_profile?
1,541,010,146,000
How do I install gpgv, the stripped-down version of gpg (and one of the helper tools)? I'm looking for a solution that I can use on Photon OS containers, without installing the full gpg. I know that on Ubuntu it can be done with sudo apt install gpgv2 which I see only takes "52.2 kB of additional disk space". Photon OS uses yum, so I tried: # yum install gpgv2 gpgv2 package not found or not installed Error(1011) : No matching packages # yum install gpgv gpgv package not found or not installed Error(1011) : No matching packages # yum search gpgv Error(1599) : No matches found # yum search gpg photon-repos : Photon repo files, gpg keys password-store : A secure password manager for unix systems. photon-repos : Photon repo files, gpg keys gnupg : OpenPGP standard implementation used for encrypted communication and data storage. gpgme : High-Level Crypto API gpgme-devel : Static libraries and header files from GPGME, GnuPG Made Easy. libgpg-error : libgpg-error libgpg-error-devel : Libraries and header files for libgpg-error libgpg-error-lang : Additional language files for libgpg-error tdnf-plugin-repogpgcheck : tdnf plugin providign gpg verification for repository metadata gnupg : OpenPGP standard implementation used for encrypted communication and data storage. gpgme : High-Level Crypto API gpgme-devel : Static libraries and header files from GPGME, GnuPG Made Easy. libassuan-devel : GnuPG IPC library password-store : A secure password manager for unix systems. photon-repos : Photon repo files, gpg keys tdnf-plugin-repogpgcheck : tdnf plugin providign gpg verification for repository metadata Eventually I settled with yum install gnupg which does provide gpgv but it comes at a cost of 11.80M. Is there a manual or alternative way of getting only gpgv? That's all I need in the container, because it's required for signature verification. Thank you!
You can get what you want using a multi-stage Dockerfile: FROM docker.io/photon:latest AS builder RUN yum -y install gnupg FROM docker.io/photon:latest COPY --from=builder /usr/bin/gpgv /usr/bin COPY --from=builder /lib64/libgcrypt.so.20 /lib64/libgpg-error.so.0 /lib64/ But unless you are running in an extremely resource-constrained environment, I would install the gnupg package and leave it at that (or switch base distributions to something that does package gpgv separately).
How to install standalone gpgv signature verification tool
1,541,010,146,000
I get the warning "This key is not certified with a trusted signature!" when verifying an Apache release: wget https://downloads.apache.org/accumulo/1.10.2/accumulo-1.10.2-bin.tar.gz wget https://downloads.apache.org/accumulo/1.10.2/accumulo-1.10.2-bin.tar.gz.asc wget https://downloads.apache.org/accumulo/KEYS gpg --import KEYS gpg --verify accumulo-1.10.2-bin.tar.gz.asc accumulo-1.10.2-bin.tar.gz This warning occurs: gpg: Signature made Tue 08 Feb 2022 11:04:00 PM HKT gpg: using RSA key 8CC4F8A2B29C2B040F2B835D6F0CDAE700B6899D gpg: Good signature from "Christopher L Tubbs II (Christopher) <[email protected]>" [unknown] gpg: aka "Christopher L Tubbs II (Developer) <[email protected]>" [unknown] gpg: aka "Christopher L Tubbs II (Developer) <[email protected]>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 8CC4 F8A2 B29C 2B04 0F2B 835D 6F0C DAE7 00B6 899D I want to trust it fully: gpg --edit-key 8CC4F8A2B29C2B040F2B835D6F0CDAE700B6899D gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. pub rsa4096/6F0CDAE700B6899D created: 2012-10-13 expires: 2024-01-12 usage: SC trust: full validity: unknown sub rsa4096/2FFC0085C23D3DA9 created: 2012-10-13 expires: 2024-01-12 usage: E sub rsa4096/4417A0C14245D003 created: 2013-04-28 expires: 2024-01-12 usage: A [ unknown] (1).
Christopher L Tubbs II (Christopher) <[email protected]> [ unknown] (2) Christopher L Tubbs II (Developer) <[email protected]> [ unknown] (3) Christopher L Tubbs II (Developer) <[email protected]> gpg> trust pub rsa4096/6F0CDAE700B6899D created: 2012-10-13 expires: 2024-01-12 usage: SC trust: full validity: unknown sub rsa4096/2FFC0085C23D3DA9 created: 2012-10-13 expires: 2024-01-12 usage: E sub rsa4096/4417A0C14245D003 created: 2013-04-28 expires: 2024-01-12 usage: A [ unknown] (1). Christopher L Tubbs II (Christopher) <[email protected]> [ unknown] (2) Christopher L Tubbs II (Developer) <[email protected]> [ unknown] (3) Christopher L Tubbs II (Developer) <[email protected]> Please decide how far you trust this user to correctly verify other users' keys (by looking at passports, checking fingerprints from different sources, etc.) 1 = I don't know or won't say 2 = I do NOT trust 3 = I trust marginally 4 = I trust fully 5 = I trust ultimately m = back to the main menu Your decision? 4 pub rsa4096/6F0CDAE700B6899D created: 2012-10-13 expires: 2024-01-12 usage: SC trust: full validity: unknown sub rsa4096/2FFC0085C23D3DA9 created: 2012-10-13 expires: 2024-01-12 usage: E sub rsa4096/4417A0C14245D003 created: 2013-04-28 expires: 2024-01-12 usage: A [ unknown] (1). 
Christopher L Tubbs II (Christopher) <[email protected]> [ unknown] (2) Christopher L Tubbs II (Developer) <[email protected]> [ unknown] (3) Christopher L Tubbs II (Developer) <[email protected]> gpg> quit Then to verify again: gpg --verify accumulo-1.10.2-bin.tar.gz.asc accumulo-1.10.2-bin.tar.gz gpg: Signature made Tue 08 Feb 2022 11:04:00 PM HKT gpg: using RSA key 8CC4F8A2B29C2B040F2B835D6F0CDAE700B6899D gpg: Good signature from "Christopher L Tubbs II (Christopher) <[email protected]>" [unknown] gpg: aka "Christopher L Tubbs II (Developer) <[email protected]>" [unknown] gpg: aka "Christopher L Tubbs II (Developer) <[email protected]>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 8CC4 F8A2 B29C 2B04 0F2B 835D 6F0C DAE7 00B6 899D How can I suppress the warning when verifying the Apache release?
Either setting trust to ultimate (5), or signing the key, will do the trick (but see the caveat below!). Option 1: set trust to ultimate $ gpg --edit-key 8CC4F8A2B29C2B040F2B835D6F0CDAE700B6899D [...] gpg> trust [...] Please decide how far you trust this user to correctly verify other users' keys (by looking at passports, checking fingerprints from different sources, etc.) 1 = I don't know or won't say 2 = I do NOT trust 3 = I trust marginally 4 = I trust fully 5 = I trust ultimately m = back to the main menu Your decision? 5 Do you really want to set this key to ultimate trust? (y/N) y [...] gpg> quit Notice that I entered 5 at the trust prompt. Now when I run the verify command: $ gpg --verify accumulo-1.10.2-bin.tar.gz.asc accumulo-1.10.2-bin.tar.gz There is no longer a warning in the output. On the other hand, I did lie when I set the trust to ultimate. Option 2: sign the key Since you do not ultimately trust the key, it is more correct to sign the key with your own, ultimately trusted, key. If you want to do some diligence first, see the caveat. To sign the key: $ gpg --sign-key 8CC4F8A2B29C2B040F2B835D6F0CDAE700B6899D [...] Really sign all user IDs? (y/N) y [...] Really sign? (y/N) y Again there is no warning when I run the verify command, and this time I did not have to lie. Caveat Do be aware that the warning is there for good reason. If you want to spend some more effort trying to determine whether you do trust the key, before signing it or marking it as ultimately trusted, this security.stackexchange.com thread is a good starting point.
How can I suppress the warning when verifying an Apache release?
1,541,010,146,000
I want to use the pass credential store for DockerHub login. Therefore, following mainly this link I installed pass (apt install pass) I installed docker-credential-pass, but following the instruction 4 to 7 under How to set up credential storage in this other link I modified the ~/.docker/config.json file adding the key-value pair "credsStore": "pass" Since I have a GPG ID, which I can see by means of the instruction gpg --list-secret-keys or also gpg -K (in the row next to uid I see [ultimate] MY_NAME <MY_EMAIL>), and which I use often to encrypt and decrypt some files, following again the second link or also the point 2 in this one, I did pass init MY_EMAIL. Here I got mkdir: created directory '/home/user/.password-store/' Password store initialized for MY_EMAIL So, it seems that all goes right until here, but then, when I try either docker login or pass insert docker-credential-helpers/docker-pass-initialized-check I get gpg: error retrieving 'MY_EMAIL' via WKD: No data gpg: MY_EMAIL: skipped: No data gpg: [stdin]: encryption failed: No data Password encryption aborted. However, as I said before, I use often gpg --output out_file.gpg --encrypt --recipient MY_EMAIL input_file without any problem. Further, the solution in gpg: error retrieving '[email protected]' via WKD does not seem suitable to my issue, since when I look for my key with the gpg commands I mentioned above, I can see expires: 2023-07-20]. So, which is now the problem and what can I do? I work on Debian 11. Maybe the issue is related to the next claim under https://github.com/docker/docker-credential-helpers: `pass` needs to be configured for `docker-credential-pass` to work properly. It must be initialized with a `gpg2` key ID. Make sure your GPG key exists is in `gpg2` keyring as `pass` uses `gpg2` instead of the regular `gpg`. What should I do if this is the problem? 
I have also tried doing pass init ID instead of pass init MY_EMAIL, getting the ID from gpg2 --list-secret-keys --keyid-format=long and the line sec rsa3072/ID date ..., and ran into a problem like the one described here, but the solutions given there do not work either. Thanks in advance!
Solved! The issue was that "my" old GPG ID belonged to my root user, so I needed to generate a key as my non-root user.
`gpg: error retrieving "email" via WKD: No data` when trying credential storage for DockerHub login
1,541,010,146,000
I am trying to install https://electrum.org/#download and for that I am following the instructions, but when I try to verify the signature, I get the following error: └─$ gpg --verify Electrum-4.1.5.tar.gz.ThomasV.asc Electrum-4.1.5.tar.gz 2 ⨯ gpg: Signature made Mon 19 Jul 2021 09:22:29 PM MSK gpg: using RSA key 6694D8DE7BE8EE5631BED9502BD5824B7F9470E6 gpg: Can't check signature: No public key ┌──(katya12㉿kali)-[/home/katya/soft/electrum] └─$ gpg --import Electrum-4.1.5.tar.gz.ThomasV.asc 2 ⨯ gpg: no valid OpenPGP data found. gpg: Total number processed: 0 There is an instruction: On Linux, you can import that key using the following command: gpg --import ThomasV.asc. But there is no ThomasV.asc file anywhere. Could you tell me how to verify the signature of the latest Electrum version for Linux? UPDATE: I have downloaded the key from a different source (which is not mentioned in the instructions) https://raw.githubusercontent.com/spesmilo/electrum/master/pubkeys/ThomasV.asc then tried to verify the signature with it, with the following error: └─$ sudo gpg --import ThomasV.asc 2 ⨯ [sudo] password for katya12: gpg: key 2BD5824B7F9470E6: "Thomas Voegtlin (https://electrum.org) <[email protected]>" not changed gpg: Total number processed: 1 gpg: unchanged: 1 ┌──(katya12㉿kali)-[/home/katya/soft/electrum] └─$ sudo gpg --verify ThomasV.asc Electrum-4.1.5.tar.gz 2 ⨯ gpg: verify signatures failed: Unexpected error
gpg: Can't check signature: No public key You need to import the public key with gpg --import. Here is a list of Electrum pubkeys. Then sign it with your own private key (which means it is trusted by you). In the end: gpg --verify signature-of-file.asc file
How to verify Electrum signature in Linux?
1,541,010,146,000
I want to do regular backups of my remote VPS via a cronjob. Both systems run Debian 10. I have been following this guide and tweaked it to my liking. Relevant parts of the script: /root/.local/bin/backup #!/bin/bash SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket) rsync -avzAHXh -e ssh root@[someIP]:/path/to/remote/dir /path/to/local/dir \ || { echo "rsync died with error code $?"; exit 1; } When I run this from the terminal, everything works fine. However if I run it via a cronjob: crontab -u root -e # m h dom mon dow command 0 6 * * * /root/.local/bin/backup >> /var/log/backup 2>&1 Then /var/log/backup shows: root@[someIP]: Permission denied (publickey).^M rsync: connection unexpectedly closed (0 bytes received so far) [Receiver] rsync error: error in rsync protocol data stream (code 12) at io.c(235) [Receiver=3.1.3] rsync died with error code 12 What is going wrong in the cronjob and what can I do about it? PS: I have deleted the passphrase for the gpg key I use here, trying to make this work. Ideally I would like a solution that works even when I add a passphrase again.
Commands executed by Cron run with an extremely basic environment and PATH; a frequent error is to test a command or script as your normal user and then have it fail when run by Cron, because exported environment variables are often overlooked. It is prudent to do a final test, as root, in a completely cut-down environment (a subshell works well) before deployment. You can always add a one-off cronjob that simply prints its environment to a log file; this enables you to emulate the exact conditions your command will run under when invoked by Cron. A second advantage is that any error can then be reproduced on your terminal, which makes debugging much easier. It looks like the variable assigned in your script is not exported, so SSH will not pick it up. It is also important to use absolute file paths in a script: unless you change directory explicitly, you cannot assume you are in any particular directory, so printing the working directory in the one-off test script mentioned above helps as well. You cannot assume all distributions are identical in this respect. It certainly doesn't hurt to check.
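Two of the points above — the missing export and testing in a Cron-like environment — can be demonstrated quickly (the socket path is a made-up placeholder):

```shell
set -e
unset SSH_AUTH_SOCK

# A plain assignment is invisible to child processes such as ssh/rsync...
SSH_AUTH_SOCK=/tmp/fake.sock
sh -c 'echo "child sees: [$SSH_AUTH_SOCK]"'   # prints: child sees: []

# ...while an exported variable is inherited by them
export SSH_AUTH_SOCK=/tmp/fake.sock
sh -c 'echo "child sees: [$SSH_AUTH_SOCK]"'   # prints: child sees: [/tmp/fake.sock]

# Emulate Cron's near-empty environment to smoke-test a script before deploying
env -i HOME="$HOME" PATH=/usr/bin:/bin sh -c 'echo "PATH=$PATH"'
```

In the backup script, changing the assignment to export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket) lets the ssh child process find the agent socket.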
Remote Server: Permission denied (publickey) when using rsync with ssh with gpg via cronjob
1,541,010,146,000
So I've got a situation where programX.py creates files such as file1.txt. How could one make it so that only programX.py has the rights to edit (write to) file1.txt? So anything else (such as a user) cannot edit file1.txt - they would only be able to read from it. I've read up about digital signatures (gpg), but I'm not sure if it would be feasible in terms of system resources to unlock the file -> write -> lock file whenever there is something to be written to the file. Ideally, the (read-only except for programX.py) mechanism would already be applied to the file under its properties when it is created by programX.py for the first time - that way there would be no need to be constantly unlocking and locking the files.
Run programX.py as a separate functional user account and group. This will cause all files created by that program to be owned by the functional account. Set the umask in the functional account's profile to 0077. This will limit access to the files to the functional account only. From there, within programX.py you can set the mode of any file you wish to allow others to read when you create it (i.e. mode 640 or 644).
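The interplay of the umask and an explicit per-file mode can be sketched like this (a scratch directory stands in for the functional account's files; creating the account itself needs root and is left out):

```shell
set -e
dir=$(mktemp -d)   # stand-in for the functional account's data directory

# Under umask 0077, anything the account creates is owner-only
( umask 0077; touch "$dir/file1.txt" )
stat -c '%a %n' "$dir/file1.txt"   # mode is 600: nobody else can even read it

# For files that should be readable (but never writable) by others, the
# program relaxes the mode explicitly when it creates them
chmod 644 "$dir/file1.txt"
stat -c '%a %n' "$dir/file1.txt"   # mode is 644: others read, only owner writes
```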
Make files read-only - except for the program that creates them
1,541,010,146,000
I use Gnome 3.36.3 (Ubuntu 20.04.1) with GnuPG 2.2.19 as SSH agent. I tell OpenSSH where to find the GnuPG SSH agent socket in my ~/.profile file: export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket) This works fine in terminal windows. Whenever I connect to a remote SSH server, I get prompted for my PIN. But when I try to connect to a remote SFTP server with Nautilus, I get errors, either a message like "not authorized!" or I am asked for username and password (which of course won't work as I use keys only). As a manual workaround I found out that after a pkill gvfsd and a complaining beep from Nautilus, SFTP connections will work as expected. So apparently gvfs is not aware of my GnuPG SSH agent after login, but will be later, when killed and restarted. What do I have to do to make gvfs aware of my GnuPG SSH agent, without killing it manually first?
TLDR: sudo chmod +x /lib/systemd/user-environment-generators/90gpg-agent && reboot The Longread: When opening remote locations with Nautilus, the gvfs-daemon takes care of connecting to the remote system and mounting its file-system in the background. gvfs is managed by systemd. And systemd seems to manage its own environment independently of what you put in ~/.profile or ~/.bashrc. Somehow later in the session, i.e. when a bash terminal window was opened, the environment set up in ~/.profile and ~/.bashrc might be available to newly started processes, but not right from the beginning. I am not really sure how this part works, but the resulting facts are easily observable: If you look at the environment of the gvfs-daemon.service right after login you see the following: # Get the PID of the running gvfsd $ FIXME=$(systemctl --user show --property="MainPID" gvfs-daemon.service | grep -o '[0-9]*') # Show the environment of the running gvfsd $ tr "\0" "\n" < "/proc/${FIXME}/environ" You will see that the environment is a lot smaller compared to your bash session with the env command. More importantly, there is no SSH_AUTH_SOCK variable in the gvfsd process environment. If you restart the gvfs service now from your bash terminal and look at the environment again: $ systemctl --user restart gvfs-daemon.service $ FIXME=$(systemctl --user show --property="MainPID" gvfs-daemon.service | grep -o '[0-9]*') $ tr "\0" "\n" < "/proc/${FIXME}/environ" Now the SSH_AUTH_SOCK variable is present, the SFTP client invoked by the gvfs-daemon will use the gpg-agent socket as SSH agent, and remote folders in Nautilus will work as expected. The Debian GnuPG package maintainers were aware of this problem and made a systemd user environment generator script available at /lib/systemd/user-environment-generators/90gpg-agent. However, that script does not have its executable bit set, and is therefore probably never started.
After making it executable and rebooting the system, SFTP connections open as expected right from the beginning. $ sudo chmod +x /lib/systemd/user-environment-generators/90gpg-agent $ reboot Note: While researching this I was misled several times, because the environment keeps changing while working on the system. To be sure, always do a full system reboot. Just logging out of and back in to your Gnome session apparently doesn't reset everything to the same state as right after a system start.
How to start GnuPG SSH Agent for gvfs?
1,541,010,146,000
I am trying to verify the file https://clientupdates.dropboxstatic.com/dbx-releng/client/dropbox-lnx.x86_64-108.4.453.tar.gz using the signature provided here: https://clientupdates.dropboxstatic.com/dbx-releng/client/dropbox-lnx.x86_64-108.4.453.tar.gz.asc I am using the following command: gpg --verify dropbox-lnx.x86_64-108.4.453.tar.gz.asc which results in the following output: gpg: assuming signed data in 'dropbox-lnx.x86_64-108.4.453.tar.gz' gpg: Signature made Tue 20 Oct 2020 10:53:17 PM CEST gpg: using RSA key FC918B335044912E gpg: Can't check signature: No public key My GPG-configuration in ~/.gnupg/gpg.conf looks like this: keyserver keyserver.ubuntu.com auto-key-retrieve with emphasis on the auto-key-retrieve. This setup works, as in: I successfully verified files with it before, even if I didn't have the public key in question; it was retrieved as expected during verification. gpg --search-keys FC918B335044912E also shows that the key can be found on the keyserver I'm using. I can also gpg --recv-keys it, after which the verification obviously works. My question, which might stem from a misunderstanding of the operations of gpg is: Why can I manually get the key in question, but not automatically using auto-key-retrieve, even though I know it works with other keys?
Having dug a bit deeper into this, I think I can now at least partly answer this myself. Any supplemental information is of course still welcome. :) The reason is probably the fact that the given signature does not contain the fingerprint of the key, but only the key ID. You can see this for example with gpg --list-packets. For the signature in question you get :signature packet: algo 1, keyid FC918B335044912E version 4, created 1603227197, md5len 0, sigclass 0x00 digest algo 8, begin of digest fd b9 hashed subpkt 2 len 4 (sig created 2020-10-20) subpkt 16 len 8 (issuer key ID FC918B335044912E) data: [2046 bits] For another signature, where auto-key-retrieve worked, it looked like this: :signature packet: algo 1, keyid BCAA30EA9C0D5763 version 4, created 1543944543, md5len 0, sigclass 0x00 digest algo 10, begin of digest 84 51 hashed subpkt 33 len 21 (issuer fpr v4 1A4E8B7277C42E53DBA9C7B9BCAA30EA9C0D5763) hashed subpkt 2 len 4 (sig created 2018-12-04) subpkt 16 len 8 (issuer key ID BCAA30EA9C0D5763) data: [2046 bits] with emphasis on the line hashed subpkt 33 len 21 (issuer fpr v4 1A4E8B7277C42E53DBA9C7B9BCAA30EA9C0D5763) man gpg clearly states in the section about auto-key-retrieve: The order of methods tried to lookup the key is: [...] If any keyserver is configured and the Issuer Fingerprint is part of the signature (since GnuPG 2.1.16), the configured keyservers are tried. (Emphasis mine.) --recv-keys on the other hand does look up by key ID, as well. Sidenote: The Dropbox key not having the fingerprint is weird. Part of the reason for removing key ID lookup, given in this commit from 2019-07-05, was that even then the fingerprint was apparently included in signatures by default for a long time. The Dropbox signature is from 2020-10-20. Get your act together, Dropbox! xP
`gpg --recv-keys` works, but `auto-key-retrieve` does not
1,541,010,146,000
When the same string is encrypted with gpg2 multiple times, the result would differ, even when the encryption key is the same. $ echo "secret message" | gpg2 --batch --passphrase-file /tmp/key --output - --symmetric > /tmp/r $ xxd r 00000000: 8c0d 0409 0302 49c1 3718 910a c1ca f3d2 ......I.7....... 00000010: 4401 85a4 6885 26ef 7d4f c403 984d 6c03 D...h.&.}O...Ml. 00000020: 8c68 9ba9 4ea6 b214 2e9c 474a 0666 be52 .h..N.....GJ.f.R 00000030: 5d79 53cd d24b 387f 56e1 3a22 4401 a407 ]yS..K8.V.:"D... 00000040: 881b c641 8b10 b1e7 6662 aaee 3382 7151 ...A....fb..3.qQ 00000050: 565b 172e 74 V[..t $ echo "secret message" | gpg2 --batch --passphrase-file /tmp/key --output - --symmetric > /tmp/r $ xxd r 00000000: 8c0d 0409 0302 dde5 397c 8bfa 4c29 f3d2 ........9|..L).. 00000010: 4401 ca3d bba8 8259 b9e9 7a18 4031 9e86 D..=...Y..z.@1.. 00000020: 4861 ddca 8bf3 dbff f4c7 c40e be3f 4092 Ha...........?@. 00000030: 5dec 4dab ef31 3712 1fa3 76e1 4381 ed6f ].M..17...v.C..o 00000040: bb0d ca49 be0d 4256 9049 2468 07da 3ba7 ...I..BV.I$h..;. 00000050: c338 74e8 d4 .8t.. This is happening because every time gpg2 runs, it uses two random blocks at the beginning of the stream. How do I force gpg2 to always produce the same output for the same input? Some may wonder, why do I need such a thing. In fact, I'm using gpg2 to encrypt the files before sending them off-site for a backup. I want to be able to resume a backup of a large file if it is interrupted (for whatever reason: an issue with the network, a remote server crash, etc.) With deterministic encryption, this is easy: get the number of uploaded bytes (of encrypted content), encrypt the file again, check the hash of the N bytes, and if they match, continue with the remaining ones. If the encryption result is not deterministic, however, it is impossible to resume the uploads.
You can't. The most commonly accepted definitions of security (e.g., semantic security and adaptive chosen-ciphertext security) do not admit deterministic encryption. To see why, imagine that I ask you two yes/no questions, and that you encrypt the answers to me deterministically under some pre-shared key. With deterministic encryption, any eavesdropper would learn whether or not your answer to the two questions is the same. There are of course some less secure forms of encryption targeted at niche use cases (e.g., convergent encryption). But a general-purpose encryption tool such as gpg needs to provide better security, since it is intended for general use. Update If you want a reference, you can look at the GPG source code in agent/protect.c. You will see that the IV is getting set from gcry_create_nonce, which you can find documented in the libgcrypt manual as: Fill buffer with length unpredictable bytes. This is commonly called a nonce and may also be used for initialization vectors and padding. This is an extra function nearly independent of the other random function for 3 reasons: It better protects the regular random generator’s internal state, provides better performance and does not drain the precious entropy pool.
How do I force `gpg2` to always produce the same output for the same input?
1,541,010,146,000
I followed these instructions for installing docker on debian 9.11 "stretch" https://docs.docker.com/engine/install/debian/ My file /etc/apt/sources.list looks like this deb http://repo.myloc.de/debian stretch main non-free contrib deb-src http://repo.myloc.de/debian stretch main non-free contrib deb http://repo.myloc.de/debian-security stretch/updates main deb-src http://repo.myloc.de/debian-security stretch/updates main deb http://repo.myloc.de/debian stretch-updates main deb-src http://repo.myloc.de/debian stretch-updates main deb https://download.docker.com/linux/debian stretch stable #deb-src [arch=amd64] https://download.docker.com/linux/debian stretch stable The command curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - gives OK but apt update results in W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://get.docker.com/ubuntu docker Release: The following signatures were invalid: 36A1D7869245C8950F966E92D8576A8BA88D21E9 W: Failed to fetch https://get.docker.com/ubuntu/dists/docker/Release.gpg The following signatures were invalid: 36A1D7869245C8950F966E92D8576A8BA88D21E9 W: Some index files failed to download. They have been ignored, or old ones used instead. I am confused by the mentioning of ubuntu there, but probably that's fine. 
EDIT: additional information requested in a comment > uname -a Linux b028 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 GNU/Linux > grep ^deb /etc/apt/sources.list.d/* /etc/apt/sources.list.d/docker.list:deb https://get.docker.com/ubuntu docker main /etc/apt/sources.list.d/docker.list.save:deb https://get.docker.com/ubuntu docker main /etc/apt/sources.list.d/nodesource.list:deb https://deb.nodesource.com/node_0.12 wheezy main /etc/apt/sources.list.d/nodesource.list:deb-src https://deb.nodesource.com/node_0.12 wheezy main /etc/apt/sources.list.d/nodesource.list.save:deb https://deb.nodesource.com/node_0.12 wheezy main /etc/apt/sources.list.d/nodesource.list.save:deb-src https://deb.nodesource.com/node_0.12 wheezy main The content of the sources.list.d directory could be remnants that are wrong. Probably, I have to delete them.
Yes, you need to remove the docker repository under /etc/apt/sources.list.d/ (it is not a valid Docker repo, it is the URL of a Docker installation script): sudo rm /etc/apt/sources.list.d/docker.list{,.save} Then edit your sources.list: sudo apt edit-sources Change the following line: deb https://download.docker.com/linux/debian stretch stable to deb [arch=amd64] https://download.docker.com/linux/debian stretch stable Then run: sudo apt update sudo apt install docker-ce
gpg error installing docker on debian stretch
1,541,010,146,000
During work with RPM packages I frequently need to validate signatures against available GPG keys. Using rpm -qip --nosignature <package.rpm> | grep Signature gives me an Key ID, i.e.: Signature : RSA/SHA1, Mon 28. Aug 2019 06:00:00 AM CET, Key ID 1234567890abcdef whereby gpg --with-fingerprint <RPM-GPG-KEY-package> gives me a Key Fingerprint: Key fingerprint = 0987 6543 21FE DCBA 0987 6543 21FE 1234 5678 90AB CDEF Since it is not easy to compare both outputs, how to get the mentioned Key ID instead of the whole fingerprint?
During research I found that the Key ID is usually the last 8 or 16 hex digits of the key fingerprint, so I wanted to extract just those from the output. How to achieve that? I've found the following approach, which seems to be working: keyID.sh #! /bin/bash KEY_PATH=$1 KEY_FINGERPRINT=$(gpg --with-fingerprint ${KEY_PATH} | grep "Key fingerprint" | cut -d "=" -f 2 | tr -d ' ' | tr '[:upper:]' '[:lower:]') echo ${KEY_FINGERPRINT} | grep -o '.\{8\}$' echo ${KEY_FINGERPRINT} | grep -o '.\{16\}$'
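For reference, the same extraction applied to the example fingerprint from the question (the fingerprint is the made-up one from above, with the gpg call replaced by a fixed string so the steps are visible in isolation):

```shell
#!/bin/sh
fpr="0987 6543 21FE DCBA 0987 6543 21FE 1234 5678 90AB CDEF"
# Normalise: strip spaces, lowercase (rpm prints Key IDs in lowercase hex)
fpr=$(printf '%s' "$fpr" | tr -d ' ' | tr '[:upper:]' '[:lower:]')
# The long (16 hex digits) and short (8 hex digits) key IDs are the tail
printf 'long key ID:  %s\n' "$(printf '%s' "$fpr" | grep -o '.\{16\}$')"
printf 'short key ID: %s\n' "$(printf '%s' "$fpr" | grep -o '.\{8\}$')"
```

For this example the output is 1234567890abcdef (long) and 90abcdef (short), which can be compared directly against the Key ID shown by rpm -qip.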
Compare Key ID of RPM package with Key Fingerprint of RPM-GPG-KEY
1,541,010,146,000
I transferred my keys using unison to another machine. On the other machine, gpg cannot find any keys. $ gpg --list-secret-keys $ list-secret-keys does not output anything. $ ls -lha .gnupg/ total 76K drwx------ 5 alex alex 4,0K Mär 8 23:38 . drwxr-xr-x 116 alex alex 36K Mär 8 23:11 .. drwx------ 2 alex alex 4,0K Mär 8 23:38 crls.d -rw------- 1 alex alex 2,9K Dez 15 2017 dirmngr.conf -rw------- 1 alex alex 5,1K Dez 15 2017 gpg.conf drwx------ 2 alex alex 4,0K Mär 8 23:38 openpgp-revocs.d drwx------ 2 alex alex 4,0K Mär 8 23:38 private-keys-v1.d -rw------- 1 alex alex 32 Dez 15 2017 pubring.kbx -rw------- 1 alex alex 32 Mär 8 23:38 pubring.kbx~ -rw------- 1 alex alex 1,2K Dez 15 2017 trustdb.gpg but the files are there.. On the first machine: $ ls -lha .gnupg/ total 44K drwx------ 5 alex alex 4,0K Feb 10 22:16 . drwxr-xr-x 92 alex alex 4,0K Mär 9 10:14 .. drwx------ 2 alex alex 4,0K Feb 10 22:16 crls.d -rw------- 1 alex alex 2,9K Dez 26 2017 dirmngr.conf -rw------- 1 alex alex 5,1K Dez 26 2017 gpg.conf drwx------ 2 alex alex 4,0K Feb 10 20:37 openpgp-revocs.d drwx------ 2 alex alex 4,0K Feb 10 20:37 private-keys-v1.d -rw-r--r-- 1 alex alex 2,0K Feb 10 20:37 pubring.kbx -rw------- 1 alex alex 32 Dez 26 2017 pubring.kbx~ -rw------- 1 alex alex 1,3K Feb 10 22:14 trustdb.gpg $ gpg --list-secret-keys /home/alex/.gnupg/pubring.kbx ----------------------------- sec rsa3072 2019-02-10 [SC] [expires: 2021-02-09] 9806B421CC66EC0E4F1xxxxxxxxxx1B700F021CA uid [ultimate] A K <[email protected]> ssb rsa3072 2019-02-10 [E] [expires: 2021-02-09]
Note that pubring.kbx is sized 2,0K on the first machine, but only 32 bytes on the second machine. So either the file has different contents or the transfer was incomplete. The timestamp is older on the second machine too, so I'd guess the second machine got an out-of-date version of the file for some reason.
GPG Cannot Find Keys
1,541,010,146,000
How do I decrypt files that are stored in cascade subdirectories with gpg? Something like a bash script: for file in all_subdirs; do gpg --passphrase passphrase *.gpg
Two options; the first (given the bash tag): shopt -s globstar for file in **/*.gpg do gpg --passphrase passphrase "$file" done Alternatively, using the find command: find . -name '*.gpg' -exec gpg --passphrase passphrase {} \;
gpg decryption of multiple subdirectories
1,541,010,146,000
I imported an [E] subkey into a different folder from ~/.gnupg, and exported the subkey's public key with the --homedir option. I can see the subkey's public key has fewer lines than the master's public key; using diff shows that they have some identical lines at the start, but different lines at the bottom, so in the end they are still different public keys. My questions: Are they different public keys? (I still need to double-check here.) If they are different, is encryption/decryption with the subkey on its own, unrelated to the master key and the other subkeys?
In asymmetric cryptography you always deal with key-pairs. For each secret key there is a corresponding public key. So to answer your first question: yes, the public key of a primary key pair is different from the public key of its subordinate key pair. I tried to reproduce your experiment and created a GnuPG test key with a primary key (ID 0xA6271DD4) and a subordinate key (ID 0x5336E1DC). I then exported the subordinate key to a file and checked, which packets it contains. $ gpg --export-secret-subkey 5336E1DC! > subkey.gpg $ gpg --list-packets subkey.gpg | grep "\(packet\|keyid\)" :secret key packet: keyid: 877AA505A6271DD4 :user ID packet: "testtest <test@test>" :signature packet: algo 1, keyid 877AA505A6271DD4 :secret sub key packet: keyid: B0389BEB5336E1DC :signature packet: algo 1, keyid 877AA505A6271DD4 $ Please note that both the user ID and the secret subordinate key are signed by the primary key. On the first look it seems that both the primary and subordinate secret key were exported. Show more info about the first secret packet. $ gpg --list-packets subkey.gpg | head # off=0 ctb=95 tag=5 hlen=3 plen=277 :secret key packet: version 4, algo 1, created 1546169910, expires 0 pkey[0]: [2048 bits] pkey[1]: [17 bits] gnu-dummy S2K, algo: 0, simple checksum, hash: 0 protect IV: keyid: 877AA505A6271DD4 # off=280 ctb=b4 tag=13 hlen=2 plen=20 :user ID packet: "testtest <test@test>" $ When exporting a secret key in GnuPG, the corresponding public key is always exported with it. So this secret key packet contains a public key of 2048 bits plus probably its 17 bits hash. But the secret key itself is missing, only a stub was exported: gnu-dummy S2K, algo: 0, simple checksum, hash: 0. To wrap it up: When exporting a secret sub key, you always export the public sub key and the public primary key (necessary to verify the signatures) with it. You write that your public sub key has fewer lines than your public master key. I was not able to reproduce that. 
With GnuPG you can export a public key without any of its subkeys, in the example above by the command gpg --export A6271DD4! > pubkey.gpg (please note the exclamation mark). On the other hand, it is not possible to export just a public sub key. But if comparing a master key with a master key plus its sub key, the latter one naturally has more lines. So to better understand your observation it would be good to know the exact commands you used.
GPG: [E] subkey's public key is the same as master's public key?
1,541,010,146,000
I'm trying to migrate a (PHP) software module into a new server (from a CentOS 6.9 to an Ubuntu 16.04). In a certain part of a process, the code tries to launch the following command: gpg --no-tty --sign --encrypt --armor --passphrase=whatever --local-user A188E1E4! --recipient A188E1E4 So I'm trying to export the private/public key pair from the working server. I'm stuck at this point; I've tried to manage it with this very similar question but I don't know the name and path where the key should be. I've tried to run gpg --list-keys but: gpg: directory '/home/my-username/.gnupg' created gpg: new configuration file '/home/my-username/.gnupg/gpg.conf' created gpg: WARNING: options in '/home/my-username/.gnupg/gpg.conf' are not yet active during this run gpg: keyring '/home/my-username/.gnupg/pubring.gpg' created gpg: /home/my-username/.gnupg/trustdb.gpg: trustdb created I don't know which user the code is using to launch the command. Maybe I would need a way to list all the keys, not only the current user's, so I can see which key I need to export.
gpg will try to load the keys from ~/.gnupg and will not list all keys in the system, as each user has a separate keyring; you cannot do this. In your case the gpg application will try to list the keys of www-data, php, or whatever the name of the user that owns the PHP process is. You can change the location of the keys by manipulating the GNUPGHOME or HOME environment variable, or by passing --homedir to gpg. This option is not available in all gpg versions.
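A sketch of both approaches; the www-data user name and the directory are assumptions for illustration, not something given in the question:

```shell
#!/bin/sh
# Run gpg as the user that owns the PHP process, using that user's
# home directory and therefore that user's keyring:
#   sudo -u www-data -H gpg --list-keys
#
# Or point a single gpg invocation at an explicit keyring directory.
# GNUPGHOME only affects that one invocation:
tmp=$(mktemp -d)
GNUPGHOME="$tmp" sh -c 'echo "gpg would read keys from: $GNUPGHOME"'
rm -r "$tmp"
```

Either way, the keys you are looking for live in the .gnupg directory of whichever account the PHP code runs as, not in your own home directory.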
List and export GPG keys
1,541,010,146,000
I would like sudo to always do as if I had passed the -A (--askpass) option. I would like that so that sudo always uses my gnupg agent and password store. It already works if I pass -A manually, but I would like -A to always be passed.
Would an alias be enough? You can go to your $HOME/.bashrc file and, below other existing aliases, enter alias sudo='sudo -A' I have similar customized aliases in my .bashrc file that work fine. Before making it permanent, you can try it in your shell to verify that it works OK.
Always passing -A (--askpass) to sudo
1,541,010,146,000
I would like to know about the safety of RSA based encryption, especially when used to encrypt files with gpg or encrypt connections over ssh. Is it possible to reconstruct a working private key, given some public key? Is it easier to decrypt one small file? Or is the complexity of decrypting a small file equal to the complexity needed to reconstruct a whole private key? How does the bit strength affect the security/complexity of an RSA encryption? As far as I know RSA (like every asymmetric encryption) needs a high entropy to generate a matching private/public key pair in contrast to for example AES, because symmetric encryption really "uses" each bit entropy directly for encryption?! Is the rise of complexity to decrypt some file by increasing the bit strength exponential? Why are most RSA encryption tools limited to 4096 bits strength?
You should read the manual, it explains a lot. I'll comment mostly. The public key in RSA contains the product of two large, carefully chosen primes. To attack RSA, the attacker needs to find these factors. So yes, when you publish your public key the attacker knows which number to factorize; but public keys are meant to be public and are safe to publish, assuming the mathematics around factorisation of large numbers stays hard. RSA in PGP is not used to encrypt data directly: it encrypts a symmetric session key. That key is another opportunity for the attacker, but it is of course a fresh random number for every message in PGP. This way you don't need to encrypt the data once per recipient (which would multiply the amount of data); you only encrypt the symmetric key for each recipient. The size of the file is not relevant on the RSA side, because the data itself is handled by the symmetric cipher. A cipher is considered strong when the best known attack against it is brute force. Assuming that is the best approach the attacker has, increasing the bit length of the key makes the problem exponentially harder. I am not sure why 4096 bits is the usual maximum, to be honest, but I can imagine that you need multiple algorithms that all have to be proven secure, work properly, and remain efficient/usable for the user.
RSA: private key safety? (ssh/gpg) [closed]
1,541,010,146,000
How can I choose which software I want to install from a Debian repository? I know that didn't make much sense, let me explain in more detail. I want to install an «unstable» version of gnupg (with ECC support), but I'm afraid of adding an «unstable» repository to my sources.list file, because it will mess up other software when I run: aptitude upgrade In short: I want all other packages to stay at the stable version, except gnupg.
Pinning all packages in unstable is easy. Just add Package: * Pin: release a=unstable Pin-Priority: 50 or similar to /etc/apt/preferences. This will hold back all packages in unstable from upgrade by apt or aptitude. Note that there is nothing magic about 50. From man apt_preferences: 0 < P < 100 causes a version to be installed only if there is no installed version of the package NOTE: I think this could be better expressed as: causes a version to be installed only if there is no installable version of higher priority available. I.e. if pkg is available in your default release, then the unstable version of pkg will not be installed by default. So any number in that range will work. To install a version from unstable in this case, you will either have to do apt-get install pkg/unstable pkg/dep1 pkg/dep2 ... in which case you will have to add additional dependencies manually (as shown, using dep1 and dep2 as examples) if they are not available in your current release version, or apt-get install -t unstable pkg which will automatically take dependencies from unstable, which you probably don't want do to in general. So, be careful with this latter command.
Choose software from Debian repository
1,541,010,146,000
This is what I get when running apt-get update: /root$ sudo apt-get update Get:1 http://security.debian.org jessie/updates InRelease [1,507 B] Get:2 http://cdn.debian.net jessie InRelease [1,507 B] Get:3 http://cdn.debian.net jessie-updates InRelease [1,507 B] Err http://security.debian.org jessie/updates InRelease Splitting up /var/lib/apt/lists/partial/security.debian.org_dists_jessie_updates_InRelease into data and signature failed Err http://cdn.debian.net jessie InRelease Splitting up /var/lib/apt/lists/partial/cdn.debian.net_debian_dists_jessie_InRelease into data and signature failed Err http://cdn.debian.net jessie-updates InRelease Splitting up /var/lib/apt/lists/partial/cdn.debian.net_debian_dists_jessie-updates_InRelease into data and signature failed Fetched 4,521 B in 1s (4,271 B/s) Reading package lists... Done W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://security.debian.org jessie/updates InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?) W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://cdn.debian.net jessie InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?) W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://cdn.debian.net jessie-updates InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?) W: Failed to fetch http://cdn.debian.net/debian/dists/jessie/InRelease W: Failed to fetch http://security.debian.org/dists/jessie/updates/InRelease W: Failed to fetch http://cdn.debian.net/debian/dists/jessie-updates/InRelease W: Some index files failed to download. They have been ignored, or old ones used instead.
I was reading about this, but I'm still a little confused about how GPG keys work with repos. Each repository has a key associated with it that I need to have trusted on my machine before updates from that repo will be allowed, right? So if this error is caused by having an out-of-date key, how do I import the new key? Edit: I'm also curious why it says "Clearsigned file isn't valid". I thought clearsigning was only for wrapping text documents, not tarballs, unless I read this wrong.
I wasn't sure whether or not I should just delete this question because it turned out to be the very common proxy problem. I was convinced this wasn't it but doing a wget -O- http://cdn.debian.net/debian/dists/jessie/InRelease as suggested returned a response from our proxy. I didn't think I was pointing to our proxy but i was!
Debian jessie GPG error Clearsigned file isn't valid
1,541,010,146,000
I'm trying to set up a custom Ubuntu install CD using the instructions at http://beezari.livejournal.com/191717.html I've added my GPG key to ubuntu-keyring and signed the Release file with it (resulting in a Release.gpg signature file). But during installation I get gpgv: Can't check signature public key not found It looks like to install the new ubuntu-keyring with my key I need to have it already installed (in debian-installer maybe?). I can get around it with debian-installer/allow_unauthenticated=true but that seems to be quite bad practice. Update This installer CD is supposed to be used for unattended installation using PXE
Looks like the problem has to do with the fact that I use initrd from netboot. And that initrd has /usr/share/keyrings in it. I've updated ubuntu-archive-keyring.gpg there and problem with signature seems to be solved. Though ubuntu-installer can not find my packages added to extras.
How do I add custom GPG key to Ubuntu/Debian installer?
1,716,209,189,000
I need to change the gpg key originally used for pass on my system to a newly generated key. However, when I follow the advice I found on this thread: https://unix.stackexchange.com/questions/226944/pass-and-gpg-no-public-key, things don't seem to work out as they should. The command used and its output while trying to replace the original gpg key with an alternate gpg key was: $ pass init -p .password-store GPG-id mkdir: created directory '/home/naphelge/.password-store/.password-store' Password store initialized for GPG-id (.password-store) [master 8d65cea] Set GPG id to GPG-id (.password-store). 1 file changed, 1 insertion(+), 1 deletion(-) So the command seems to just be making a new dir, .password-store in the original dir .password-store and creating a new .gpg-id file with my new key's GPG-id in it, and not proceeding to re-encrypt all of the gpg files in .password-store with the new gpg-key. The same advice is provided in this thread regarding a similar goal as well: https://askubuntu.com/questions/929307/how-to-change-the-gpg-key-of-the-pass-password-store I noticed that in the original .gpg-id file in the ~/.password-store dir that it is the original gpg-key's fingerprint (without spaces between the (10) 4 digit blocks) that is saved. So I did try the same command above, pass init -p .password-store FINGERPRINT-id, using the new key's fingerprint (without spaces), as well trying just specifying the email address associated with the key, pass init -p .password-store [email protected], to try and initiate the re-encryption of the gpg files in .password-store with the new gpg-key, but always with the same result. So I am not sure, looking at other posts and the pass man page what else to try to get this to work. Any suggestions or advice appreciated. Thks.
The issue was resolved using the GUI QtPass app. QtPass made it straightforward to add the second key, re-encrypt all files in the store with it, and then uncheck the original key.
how to change/add gpg key to pass
1,716,209,189,000
pass was recently removed on my Debian sid setup when gpg (namely dirmngr gpg gpg-agent gpg-wks-client gpgsm gpgconf gpgv) were upgraded to unstable version 2.2.40-3. At this time, reinstalling pass requires downgrading gpg to 2.2.40.1. Is there a clean way for me to downgrade without apt-get removing several other packages that I know should not be removed. Error log: #apt install pass Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: Unsatisfied dependencies: gnupg : Depends: dirmngr (< 2.2.40-1.1.1~) but 2.2.40-3 is to be installed Depends: gpg-agent (< 2.2.40-1.1.1~) but 2.2.40-3 is to be installed Depends: gpg-wks-client (< 2.2.40-1.1.1~) but 2.2.40-3 is to be installed Depends: gpg-wks-server (< 2.2.40-1.1.1~) but it is not going to be installed Depends: gpg-wks-server (>= 2.2.40-1.1) but it is not going to be installed Depends: gpgsm (< 2.2.40-1.1.1~) but 2.2.40-3 is to be installed Error: Unable to correct problems, you have held broken packages. apt policy gnupg shows gnupg: Installed: (none) Candidate: 2.2.40-1.1 Version table: 2.2.40-3 200 […] sid/main […] 2.2.40.1.1 500 […] bookworm/main […] I recently set testing to 500 pin-priority (with sid at 200) in /etc/apt/preferences to transition from sid -> testing as part of next Debian release.
The problem you’re running into comes from your switch to testing. It’s fine to do this, and thank you for helping test the next release; but usually it’s best done at a quieter time in the development process. To fix your immediate problem, the safest option is to install gnupg from unstable: sudo apt install -t sid gnupg After that, given the current state of testing and unstable, you’ll have to keep an eye open for similar issues when upgrading. Essentially, you’ll be manually replicating the work done by britney2 and the release team.
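If you expect to stay in this mixed testing/unstable state for a while, an alternative to remembering -t sid each time is a pin that prefers unstable just for the gnupg family. A sketch for /etc/apt/preferences.d/ (the file name and package list are illustrative; apt_preferences(5) accepts a space-separated list in the Package field, and the priority just needs to beat the 500 you gave testing):

```
# /etc/apt/preferences.d/gnupg-from-sid (illustrative)
Package: gnupg dirmngr gpg gpg-agent gpg-wks-client gpg-wks-server gpgsm gpgconf gpgv
Pin: release a=unstable
Pin-Priority: 501
```

Remember to remove the pin once testing has caught up, or you'll keep tracking sid for these packages.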
re-install pass on Debian sid
1,716,209,189,000
When I create a keypair with gpg, it stores the secret key inside ~/.gnupg/private-keys-v1.d. It stores the public key inside a keyring file - I can name it or it uses the default location. If I look (--list-public-keys and --list-secret-keys) at my public and secret keys I can see which pair matches: the 40-character string/hash in the output is the same for both. The filename of the secret key is also 40 characters long, but different from this string. How do I find out which secret-key file matches my public key? Using gpg 2.2.40 on Debian 12.
Use gpg --list-secret-keys --with-keygrip. This path stores private keys for several different protocols (PGP, SSH, S/MIME), so it cannot use the PGP fingerprint; instead the 40-character name is the hash of the raw public key (as in, not including the PGP certificate metadata) in libgcrypt s-exp format.
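To illustrate what this looks like (the fingerprint and keygrip below are invented), each Keygrip line matches a file name in private-keys-v1.d:

```
$ gpg --list-secret-keys --with-keygrip
sec   ed25519 2023-01-01 [SC]
      1A2B3C4D5E6F7A8B9C0D1E2F3A4B5C6D7E8F9A0B
      Keygrip = 00AA11BB22CC33DD44EE55FF66AA77BB88CC99DD
uid           [ultimate] Alice <[email protected]>

$ ls ~/.gnupg/private-keys-v1.d
00AA11BB22CC33DD44EE55FF66AA77BB88CC99DD.key
```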
GPG: find secret-keyfile that matches my public-key
1,716,209,189,000
Just upgraded my Buster server to Debian 11 Bullseye. However, gpg is now broken. apt update fails with Unknown error executing apt-key apt-key --list fails with gpg: symbol lookup error: gpg: undefined symbol: gpgrt_set_confdir, version GPG_ERROR_1.0 any attempt to run gpg fails with the above error. GPG is version 2.2.27-2+deb11u2 from Debian Bullseye official repository. Similar errors found online are about building GPG from source, however, I am not doing so, I just installed the binary from Debian repositories. I tried to reinstall GPG-related packages with no effect. I plan to upgrade to Debian 12 when this will be solved, obviously I can't do it now since package signatures can't be verified. Output of ldd /usr/bin/gpg : linux-vdso.so.1 (0x00007ffdd2fbb000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fc36ee46000) libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007fc36ee33000) libsqlite3.so.0 => /lib/x86_64-linux-gnu/libsqlite3.so.0 (0x00007fc36ecf0000) libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007fc36ebd0000) libreadline.so.8 => /usr/local/lib/libreadline.so.8 (0x00007fc36eb79000) libassuan.so.0 => /usr/local/lib/libassuan.so.0 (0x00007fc36eb65000) libgpg-error.so.0 => /usr/local/lib/libgpg-error.so.0 (0x00007fc36eb41000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc36e96d000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc36e829000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fc36e807000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fc36e801000) libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fc36e7d2000) /lib64/ld-linux-x86-64.so.2 (0x00007fc36ef7e000)
As you discovered, the error is caused by one of these libraries: libreadline.so.8 => /usr/local/lib/libreadline.so.8 (0x00007fc36eb79000) libassuan.so.0 => /usr/local/lib/libassuan.so.0 (0x00007fc36eb65000) libgpg-error.so.0 => /usr/local/lib/libgpg-error.so.0 (0x00007fc36eb41000) To fix this, you should delete the old libraries in /usr/local/lib. While you’re at it, you might want to check other files in /usr/local/bin and /usr/local/lib; there might well be more that are older than their equivalents in Debian 11 and 12.
GPG broken after debian 10 -> 11 upgrade
1,716,209,189,000
I have just restored my data from a borg backup repository after a computer failure. My ~/.gnupg folder seems fine, private keys are there, and permissions look correct. Usually borg does a good job at this. When I cat the private key files, there is no sign of data corruption. However I cannot list or use the private keys. I can only interact with a public key that was imported by pacman during the installation process. I came across a few posts with similar issues of private keys not recognised after being copied from one machine to another, and usually the problem gets solved following recommendations of repeating the transfer using gpg --export-key and then importing them with the opposite command. Unfortunately I do not have that luxury since my old machine has crashed and is not recoverable. I was aware of importing and exporting keys, but I always assumed it was just a safe procedure to move keys around. So I have two questions: Do I have to review my backup scripts and use the export command for my gpg secret keys? Can I recover my secret keys nonetheless? ======= Edit ====== I found a post on the arch forum with a user having exactly the same problem I have, so I got more leads to follow. What I tried so far: gpg --version > gpg (GnuPG) 2.4.5 ps aux | grep gpg-a > /usr/bin/gpg-agent --supervised #About ownership: chown -R $USER:$USER .gnupg # Checking UID/GID shows a result of `1000` both on the filesystem # and in the backup archive. 
ls -vn .gnupg/private-keys-v1.d gpg --export-secret-keys > gpg: WARNING: nothing exported gpg -v --list-secret-keys > gpg: enabled compatibility flags: > gpg: using pgp trust model # Most relevant results of: strace -f -o /tmp/gpg.strace gpg --list-secret-keys cat /tmp/gpg.strace > 76220 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) > 76220 access("/home/$USER/.gnupg/secring.gpg", F_OK) = -1 ENOENT (No such file or directory) # Listing public keys gpg --list-keys > Returns one public key imported by pacman during install gpg -K > Returns nothing ls -ln .gnupg/ .rw-r----- 12 1000 13 Mar 10:45 common.conf drwx------ - 1000 6 Sep 2023 crls.d .rw------- 2.0k 1000 13 Sep 2023 gpg-agent.conf .rw------- 703 1000 5 Mar 18:48 gpg.conf drwx------ - 1000 30 Aug 2023 private-keys-v1.d drwxr-x--- - 1000 14 Mar 19:27 public-keys.d .rw-r--r-- 0 1000 24 Aug 2023 pubring.gpg .rw-r--r-- 7.7k 1000 5 Mar 12:13 pubring.kbx .rw-r--r-- 7.0k 1000 7 Sep 2023 pubring.kbx~ .rw------- 600 1000 8 Mar 07:20 random_seed .rw-r----- 676 1000 30 Aug 2023 sshcontrol .rw------- 1.6k 1000 2 Sep 2023 trustdb.gpg
I am trying a Garuda KDE install at the moment, and it turns out it comes with some default gpg settings I have not yet completely figured out. Anyway, it turns out some process is constantly recreating a basic .gnupg folder. It does not look like the gpg agent is to blame, since I was able to disable all gpg-related systemd services (mainly gpg-agent.service and gpg-agent.socket) as both user and root and then killall gpg-agent. What was happening is that I would never actually restore my own gpg folder from backups, but merge it with an automatically created one, and for some reason it would not work. Finally, just to make sure I found the culprit, I could rm -rf the folder and restore it from backup using a script, so that whatever process was recreating it would not have the time to do so. It has been working fine since.
Recover poorly backed up gpg secret keys
1,716,209,189,000
How do I change the passphrase of an SSH key that is stored in the gpg-agent?
First of all, if you want to change the passphrase for a key that you have been using for a long time, consider replacing the key entirely. Usually it's not a big effort and key rotation is a good routine. Now to the question: While you can change the passphrase of the ssh key stored in your ~/.ssh directory with ssh-keygen -p -f ~/.ssh/id_ed25519, I think the gpg-agent makes a copy. To change the passphrase of that copy, run gpg-connect-agent Inside of this prompt, run keyinfo --ssh-list --ssh-fpr. This will list your keys: S KEYINFO 077241111111111111111111111111111119D506 D - - 1 P MD5:36:a3:62:11:11:11:11:11:11:11:11:11:11:1f:45:6e - S OK Copy the id after the KEYINFO of the key you want to change the passphrase for and run passwd <id> (so passwd 077241111111111111111111111111111119D506 in the example above) and follow the prompts Leave the agent prompt by typing /bye
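If the interactive prompt is awkward to script, the keygrip can also be pulled out of the keyinfo output with a little awk. A sketch; the sample output below is hard-coded to mirror the example above, since real output depends on your keys:

```shell
# Sample output, standing in for a real session; on a live system it
# would come from: gpg-connect-agent 'keyinfo --ssh-list --ssh-fpr' /bye
sample='S KEYINFO 077241111111111111111111111111111119D506 D - - 1 P MD5:36:a3:62:... - S
OK'

# The keygrip is the third field of each "S KEYINFO" status line.
keygrip=$(printf '%s\n' "$sample" | awk '$1 == "S" && $2 == "KEYINFO" { print $3; exit }')
echo "$keygrip"
```

With the keygrip in a variable, gpg-connect-agent "passwd $keygrip" /bye would then start the passphrase change without typing into the prompt (pinentry still pops up for the old and new passphrases).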
Change passphrase of SSH key stored in the gpg-agent
1,716,209,189,000
GPG decrypt command I'm trying to decrypt a file with gpg, and to do this I'm executing (with success) the following command: gpg --passphrase "12345678" --batch --yes --no-symkey-cache filename.tar.gz.gpg The result of the execution of the command is: the file filename.tar.gz.gpg is correctly decrypted and the file filename.tar.gz is created; by the options --passphrase "12345678" --batch --yes the GUI for the insertion of the passphrase is not opened. GPG Warning and --quiet option But there is a problem: the execution of the previous command produces the following warning: gpg: WARNING: no command supplied. Trying to guess what you mean ... and the correct output: gpg: AES256 encrypted data gpg: encrypted with 1 passphrase By this post, I know that with the option --quiet the previous gpg command doesn't produce any output message. The warning seems to say that my command does not explicitly request the decryption of the file filename.tar.gz.gpg. My question So my question is: is there a way to tell gpg that it must decrypt a file, to avoid the no command supplied warning?
Solution: option --output and command --decrypt Thanks to @GracefulRestart and to this post (which explained how to set the target of the output of gpg), I have finally found the command suited to my needs: gpg --passphrase "12345678" --batch --yes --output filename.tar.gz --no-symkey-cache --decrypt filename.tar.gz.gpg In the command I have added 2 options: --output filename.tar.gz, which allows writing the output of the gpg command to the file filename.tar.gz the command --decrypt filename.tar.gz.gpg, as suggested by @GracefulRestart, to tell gpg which command to execute With these modifications, and without --quiet, the warning is not present.
Find an option other than --quiet to avoid GPG warning when a file is decrypted
1,716,209,189,000
Environment OS is Artix Linux 6.0.11 GPG is 2.2.40 libcrypt is 1.10.2 keyserver is any (ubuntu , sks, mit, etc.) Problem I wanted to update my system via pacman -Syu and needed to import a key by Torsten Kessler, David Runge and others, whose keys "could not be looked up remotely". OK, gpg --recv-keys it is then! But alas, woe is me as GPG just waits, and waits, and waits until it times out and says "server indicated a failure", here's the output: [user@localhost ~]$ dirmngr --daemon --debug-all --standard-resolver & gpg --debug-level 7 --keyserver hkp://keyserver.kjsl.com:80 --recv-keys ED587B6247A4152D [1] 20203 gpg: enabled debug flags: packet filter cache memstat trust extprog dirmngr[20203]: reading options from '/home/user/.gnupg/dirmngr.conf' dirmngr[20203]: reading options from '[cmdline]' dirmngr[20203]: enabled debug flags: x509 crypto memory cache memstat hashing ipc dns network lookup extprog dirmngr[20203]: listening on socket '/run/user/1000/gnupg/S.dirmngr' DIRMNGR_INFO=/run/user/1000/gnupg/S.dirmngr:20206:1; export DIRMNGR_INFO; dirmngr[20206.0]: error loading certificate '/etc/ssl/certs/ca-certificates.crt': Certificate expired dirmngr[20206.0]: error loading certificate '/etc/ssl/certs/ca-certificates.crt': Certificate expired dirmngr[20206.0]: permanently loaded certificates: 141 dirmngr[20206.0]: runtime cached certificates: 0 dirmngr[20206.0]: trusted certificates: 141 (141,0,0,0) gpg: keyserver receive failed: Server indicated a failure gpg: keydb: handles=0 locks=0 parse=0 get=0 gpg: build=0 update=0 insert=0 delete=0 gpg: reset=0 found=0 not=0 cache=0 not=0 gpg: kid_not_found_cache: count=0 peak=0 flushes=0 gpg: sig_cache: total=0 cached=0 good=0 bad=0 gpg: random usage: poolsize=600 mixed=0 polls=0/0 added=0/0 outmix=0 getlvl1=0/0 getlvl2=0/0 gpg: rndjent stat: collector=0x0000000000000000 calls=0 bytes=0 gpg: secmem usage: 0/32768 bytes in 0 blocks dirmngr[20206.0]: socket file has been removed - shutting down dirmngr[20206.0]: dirmngr 
(GnuPG) 2.2.40 stopped At first I thought that it was NetworkManager (it has done so before and is a general hindrance to me), so I uninstalled it - it wasn't the problem. dhcpcd also wasn't the problem; /etc/resolv.conf looks like: # Generated by dhcpcd from eth0.dhcp nameserver 9.9.9.9 nameserver 192.168.1.1 # /etc/resolv.conf.tail can replace this line Note: /etc/resolv.conf.head contains nameserver 9.9.9.9 I can't ping the servers, though nslookup and web browser work just fine, so I'm at a loss. It isn't a DNS thing, nslookup says so; ping doesn't work though. Is there anything obvious to debug that I've overlooked in my blind folly? Appendix I: Manually importing ~3 keys from keyserver.ubuntu.com results in marginal/unknown trust and pacman considers the cached packages to be corrupt (I got tired of confirming the provider selection between galaxy & extra and ran with --noconfirm, deleting 600Mb of cached valid packages) I am no closer to finding out why gpg can't connect to the server, I will try a proxy, though I doubt it will help
Remember to keep your keyrings and mirrorlists up-to-date, and join your repo's mailing list for install-breaking changes like adding a repository and moving several packages there. Artix had (now quite some time ago) added the [universe] repo and moved several Arch packages there, the most important one being archlinux-keyring. I had noticed that I only had the lib32 keyring installed, but never bothered to check why, blindly assuming that Artix had moved sufficiently far away from Arch that it could maintain its own versions of packages... I've now enabled the [universe] repo, pacman -Syu'd and pacman-key {--init,--populate,--refresh-keys}'d, fixing my problems
gpg cannot resolve/connect to keyserver
1,716,209,189,000
I recently learned about pass git integration, which allows to sync my passwords with a remote git repo. Which I instantly didn't hesitate to configure. So then I decided to clone this repo on another computer (with another GPG key installed) and try to access the passwords. It however complained: pass myaccount gpg: decryption failed: no secret key I guess that comes from the fact that I have another GPG key installed, not the one I encrypted my passwords with (I use the same GPG key ID though). So, how do I access those passwords without transferring private GPG keys from the original machine to this one? I of course know the passphrase, and can transfer public ones if needed. Or am I required to copy over the whole keyring? Thoughts... Now after writing this post, I think I'm closer to understanding how it works. So basically the keys I can find in ~/.gnupg aren't just keys - they are encrypted keys. Encrypted with the passphrase. And so it's relatively safe to copy them to another machine. Is it correct?
I still haven't tried it myself, but pass should be able to encrypt passwords for multiple GPG keys, so transfer the public key from the second computer to the first, and run pass init <id-of-GPG-key-1> <id-of-GPG-key-2>; then pass should re-encrypt all passwords in your store for those two keys, and (when synced - that's something git is good for) they should be usable on both computers. If you can't remember the id of the key that passwords are currently encrypted for, you can look in <password-store>/.gpg-id.
How do I access my passwords from `pass` from another computer without transferring private GPG keys?
1,716,209,189,000
I am using GPG Version 2.2.20 and whenever I run the following command while signing the release file, I am prompted for the passphrase. gpg --default-key <my_email> --clearsign -o - Release > InRelease I want to avoid getting prompts and pass the passphrase directly in the command. After reading a few answers for other questions, I tried these commands: gpg --default-key <my_email> --passphrase <my_passphrase> --clearsign -o - Release > InRelease gpg --default-key <my_email> --batch --passphrase <my_passphrase> --clearsign -o - Release > InRelease But the problem is still the same, it is prompting me for the passphrase instead of taking directly from the given command. How do I pass the passphrase correctly to the command?
For GnuPG version 2.1 and later you must include the option --pinentry-mode loopback in order for the --passphrase option to work. In your case: gpg --default-key <my_email> --batch --pinentry-mode loopback --passphrase <my_passphrase> --clearsign -o - Release > InRelease
How to avoid prompts for passphrase while clearsigning a file?
1,716,209,189,000
Is there a way to generate the default gpg.conf file? I can't find one in my fs with find / -name gpg.conf. I also tried checking gpgconf to see if it had an option for generating one, and looked for other gpg utils that come with the standard gnupg2 installation, but nothing stood out.
Note that the contents of gpg.conf are only used to set options that are not the default. So if you want the default options, there's no reason to have a gpg.conf file. That said, the correct placement for gpg.conf is in the .gnupg folder and it is easily created with touch ~/.gnupg/gpg.conf.
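If you do want something as a starting point, remember the file only needs the options you intend to change; a minimal hand-written example (all values illustrative):

```
# ~/.gnupg/gpg.conf -- only non-default options belong here
default-key 0xDEADBEEFDEADBEEF
keyserver hkps://keyserver.ubuntu.com
no-greeting
```

Anything not listed keeps its compiled-in default, so an empty file behaves the same as no file at all.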
Generating default gpg.conf with default gnupg2 utils
1,614,473,006,000
I have a Debian buster system where I am logged in to the local GUI and also logged in over ssh. I need to sign something with gnupg over ssh. Unfortunately I get no prompt for a passphrase in my ssh session; I suspect the prompt is being shown graphically in the GUI, but since I'm not in front of the machine right now I can't check. In the past I have achieved this by killing the gpg agent and starting one manually, but that doesn't seem to work anymore. The agent tells me it is already running. From some searching it seems that a systemd user service may be responsible for this.
I was able to work around this issue by creating a symlink to my gnupg home directory with: ln -s .gnupg .gnupg_ I was then able to start a gpg agent manually in the symlinked gnupg home with GNUPGHOME=$HOME/.gnupg_ gpg-agent --pinentry-program /usr/bin/pinentry-curses --daemon bash And within that session I was able to use gpg commands and successfully get a passphrase prompt.
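For what it's worth, the fix documented in the gpg-agent manual is to tell the already-running agent which terminal to use, rather than duplicating the GnuPG home. These two lines, run in the ssh session (or added to your shell profile), redirect the pinentry to the current tty:

```
export GPG_TTY=$(tty)
gpg-connect-agent updatestartuptty /bye
```

After that, the passphrase prompt should appear in the ssh session instead of on the local GUI.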
How to use gnupg remotely on buster while also logged into the local GUI
1,614,473,006,000
I installed GnuPG and KGPG. I created a text doc whereby I wrote down nothing except the other person's public key configuration. There is a tab for importing public keys. However, when I select the file for import, it does not import it. When trying it manually, without KGPG, I get the error message: no valid KGPG data. Probably the same thing with exporting data. Am I supposed to be adding their public key to a "file" in a different way? I name the file xyz.key and I get a special looking icon. UPDATE: I seem to have figured it out, at least in-part. The main thing was first using the "export public key" tab and saving it using the clipboard (not file) and then transferring via copy/paste to a doc. This saved the public keys in an encrypted form. Then when the person receives this, they use the import tab and the encrypted data of the public key gets imported correctly. The "no valid KGPG data" message I was getting before was because I was trying to import & export non-encrypted key data. The fact that I am unable to fully get the entire gpg encryption thing to work makes me wonder if I did it correctly. Thanks.
The main thing was first using the "export public key" tab and saving it using the clipboard (not file) and then transferring via copy/paste to a doc. This saved the public keys in an encrypted form. Then when the person receives this, they use the import tab and the encrypted data of the public key gets imported correctly. The "no valid KGPG data" message I was getting before was because I was trying to import & export non-encrypted key data. The fact that I am unable to fully get the entire gpg encryption thing to work makes me wonder if I did it correctly.
How do I import public keys when using KGPG?
1,614,473,006,000
I'm trying to install docker on my Raspberry Pi 4b (Raspbian 10 buster), but get this error when running the installation command listed on the official website: ~ > sudo curl -sSL https://get.docker.com | sh # Executing docker install script, commit: f45d7c11389849ff46a6b4d94e0dd1ffebca32c1 + sudo -E sh -c apt-get update -qq >/dev/null W: GPG error: http://ftp.utexas.edu/mariadb/repo/10.1/debian jessie InRelease: The following signatures were invalid: 199369E5404BD5FC7D2FE43BCBCB082A1BB943DB E: The repository 'http://ftp.utexas.edu/mariadb/repo/10.1/debian jessie InRelease' is not signed. This is the first time I've seen this error, and unlike similar questions here it doesn't appear with apt update/apt upgrade. The error hints at a problem with mariadb, but, although I had some problems installing it, it is now running without any problems on version 10.3 Any ideas on how I can fix the invalid signatures?
I had some issues installing Docker on a Raspberry Pi as well; the resource that helped me the most was the Docker documentation: https://docs.docker.com/install/linux/docker-ce/debian/. However, your issue seems to lie with the repo itself. In the URL you can see that it's a repository for Jessie and not for Buster. You should check your /etc/apt/sources.list and all the contents of /etc/apt/sources.list.d/, maybe even resetting them to the defaults.
GPG error: "The following signatures were invalid" on Docker installation with cURL
1,614,473,006,000
I am looking for a simple, open-source password manager for linux with a CLI. It has to have a way of retrieving a password via the command line, so I can use it in several scripts (that sync my email for example). I came across pass (https://www.passwordstore.org/). It looks very promising and exactly like the program I was looking for, however there is one thing I can't figure out. Using pass git init and pass git push, I can synchronize the passwords to an external git repository. However: this is not enough to use the passwords on a different machine, because the gpg keys are not synchronised. How can I synchronize the gpg keys/pass passwords in a safe way? I found this question: synchronising gnupg and pass but is doesn't really answer my question. It just says "don't put your gpg keys on the web".
In the end, I gave up on trying to make this work and used KeePassXC. Then, in order to obtain a password from KeePass using the command line, I use: gpg2 --use-agent --output - -q passphrase.gpg | keepassxc-cli show -q -a Password passwords.kdbx the_secret_password_i_am_looking_for The passphrase.gpg file contains the KeePass password and is encrypted using a symmetric key, meaning that it only needs a passphrase to unlock it. In my gpg-agent.conf file, I put the following contents: max-cache-ttl 60480000 default-cache-ttl 60480000 display :0 This effectively remembers the passphrase until the end of the session. I hope it is of use to someone. Edit: the synchronization part is done by syncing the KeePass database using Dropbox.
Synchronise passwords across devices with pass
1,614,473,006,000
Hey all, I am trying to build an OpenVAS docker container and having some issues with openvasmd. I now have it 80% working, but am stuck with something that will cause me issues when I try to add auto-updating and so on. I have three sessions open Session A: Where I ran all the commands to set up openvas Session B: monitoring the openvasmd.log Session C: Other things If I run the command "openvasmd --rebuild" on "Session A", "Session B" outputs "Updating NVT cache." If I run the command "openvasmd --rebuild" on "Session C", "Session B" outputs "error starting search for OpenPGP key 'OpenVAS Credential Encryption': Inappropriate ioctl for device " From what I can work out, "Session A" has openvas GPG set up to auth, but "Session C" does not. My question is: how do I get GPG info working on "Session C" as well? This has been driving me nuts all day and this is where I got to. Any help would be welcome and gratefully received
Found it out :D Asked the IRC chat and someone came back with the below link https://lists.wald.intevation.org/pipermail/openvas-discuss/2017-January/010592.html Turns out to be a known bug in a lib file on Debian
GPG openvas9 debian
1,614,473,006,000
How can I check that the "enabled=1" repositories have "gpgcheck=1" in the "/etc/yum.repos.d/" directory (and whether the "gpgkey=" file exists or not)? Q: I'm searching for a solution to do this (one-liner?), I mean to list all the "enabled=1" repositories with their "gpgcheck" status, and whether the "gpgkey=XXX" file exists or not - the problem is that one .repo file could contain several repositories! p.s.: afaik the "gpgkey=XXX" -> XXX could only be a local file, not a URL over http... but correct me if I'm wrong.
Can't think of any kind of simple way to do this. Whipped the following together, though it's not particularly elegant. sed '/^\[.*\]/s,\[,\n<<[,' /etc/yum.repos.d/*.repo | awk ' BEGIN { RS = "\n<<" } /\nenabled=1/ && !/\ngpgcheck=1/ { print $1"~gpgcheck disabled for enabled repo" } /\nenabled=1/ && /\ngpgcheck=1/ && !/\ngpgkey=/ { print $1"~gpgcheck enabled for enabled repo, but missing gpgkey setting" } /\nenabled=1/ && /\ngpgcheck=1/ && /\ngpgkey=/ { print $1"~gpgcheck enabled for enabled repo, see output below for gpgkey file status" } ' | column -ts~ printf "\nChecking gpg keys for enabled repos\n" sed '/^\[.*\]/s,\[,\n<<[,' /etc/yum.repos.d/*.repo | awk ' BEGIN { RS = "\n<<" } /\nenabled=1/ && /\ngpgcheck=1/ && /\ngpgkey=/ ' | awk -F= '/^gpgkey=/ { sub("^file://", "", $2); print $2 }' | xargs ls -l Drop it into a script and you're off.
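A more compact variant of the same idea, shown here against an inline sample file so it can be followed without touching /etc/yum.repos.d (the repo names and key path are made up):

```shell
# Build a sample .repo file with two repos: one enabled with gpgcheck
# and a gpgkey, one enabled with gpgcheck switched off.
cat > /tmp/sample.repo <<'EOF'
[good-repo]
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sample

[risky-repo]
enabled=1
gpgcheck=0
EOF

# Track the current [section] and flag enabled repos without gpgcheck=1.
awk -F= '
/^\[/            { repo = $0 }
$1 == "enabled"  { enabled[repo]  = $2 }
$1 == "gpgcheck" { gpgcheck[repo] = $2 }
END {
    for (r in enabled)
        if (enabled[r] == 1 && gpgcheck[r] != 1)
            print r, "gpgcheck disabled for enabled repo"
}' /tmp/sample.repo
# -> [risky-repo] gpgcheck disabled for enabled repo
```

Pointing the awk at /etc/yum.repos.d/*.repo instead of the sample file gives the real report; unlike the sed trick above, this version never needs to change the record separator.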
How to check that gpg checking is correct on RHEL based machines or not?
1,614,473,006,000
I'm using Kali WSL (version 1) on Windows 10, and I installed it on a non-C drive with this method here. This is the return of uname -r: 4.4.0-19041-Microsoft I got this error while running apt update: user@host:~$ sudo apt update [sudo] password for user: Get:1 <mirror_site> kali-rolling InRelease [30.5 kB] Err:1 <mirror_site> kali-rolling InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> Fetched 30.5 kB in 2s (12.5 kB/s) Reading package lists... Done Building dependency tree Reading state information... Done All packages are up to date. W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: <mirror_site> kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]> W: Some index files failed to download. They have been ignored, or old ones used instead. Then I tried to fix this with gpg --keyserver hkp://keys.gnupg.net --recv-key 7D8D0BF6, but got this error: E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation So I tried installing gnupg_2.2.27-2_all.deb manually from Debian Package. However, more dependency problems appeared. user@host:~$ sudo dpkg -i gnupg_2.2.27-2_all.deb (Reading database ... 17159 files and directories currently installed.) Preparing to unpack gnupg_2.2.27-2_all.deb ... Unpacking gnupg (2.2.27-2) over (2.2.27-2) ... dpkg: dependency problems prevent configuration of gnupg: gnupg depends on dirmngr (<< 2.2.27-2.1~); however: Package dirmngr is not installed. gnupg depends on dirmngr (>= 2.2.27-2); however: Package dirmngr is not installed. 
gnupg depends on gnupg-l10n (= 2.2.27-2); however: Package gnupg-l10n is not installed. gnupg depends on gnupg-utils (<< 2.2.27-2.1~); however: Package gnupg-utils is not installed. gnupg depends on gnupg-utils (>= 2.2.27-2); however: Package gnupg-utils is not installed. gnupg depends on gpg (<< 2.2.27-2.1~); however: Package gpg is not installed. gnupg depends on gpg (>= 2.2.27-2); however: Package gpg is not installed. gnupg depends on gpg-agent (<< 2.2.27-2.1~); however: Package gpg-agent is not installed. gnupg depends on gpg-agent (>= 2.2.27-2); however: Package gpg-agent is not installed. gnupg depends on gpg-wks-client (<< 2.2.27-2.1~); however: Package gpg-wks-client is not installed. gnupg depends on gpg-wks-client (>= 2.2.27-2); however: Package gpg-wks-client is not installed. gnupg depends on gpg-wks-server (<< 2.2.27-2.1~); however: Package gpg-wks-server is not installed. gnupg depends on gpg-wks-server (>= 2.2.27-2); however: Package gpg-wks-server is not installed. gnupg depends on gpgsm (<< 2.2.27-2.1~); however: Package gpgsm is not installed. gnupg depends on gpgsm (>= 2.2.27-2); however: Package gpgsm is not installed. gnupg depends on gpgv (>= 2.2.27-2); however: Version of gpgv on system is 2.2.12-1. dpkg: error processing package gnupg (--install): dependency problems - leaving unconfigured Errors were encountered while processing: gnupg I don't know what to do next after this. Do I really need to download and install all these dependencies manually? What do I have to do to make apt work?
This appears to be due to an outdated link in the Microsoft docs for installing distributions manually. The Kali package linked to there is 2019.2, and I can reproduce the problem you are experiencing with that (outdated) package. There's certainly a later Kali WSL package available, because the version installed from the Microsoft Store is 2021.2. Unfortunately, I do not know the direct link to that package. I've submitted this as an issue on the MicrosoftDocs\WSL Github. The problem is further compounded by the fact that Kali provided for WSL is a very minimal distribution and doesn't include some of the tools, such as GPG, that would be provided in a "normal" distribution. So the instructions you would normally follow for updating the keys don't work on WSL. Perhaps there's a way to get gpg installed with all its dependencies without a working apt, but in my opinion it's not worth it. Instead, let me propose an alternative installation method, with the understanding that you need to install on a drive other than C:. That said, you're going to need to temporarily dedicate (or free up) a little under 1GB on the C: drive during this process. We can free it up at the end. Please don't let the length of the instructions below scare you. I tend to be overly detailed to make sure you understand what is going on, and to try to make it as failproof as possible. I have tested the entire process below personally, but let me know if you run into any trouble. First, remove the existing Kali installation. I'm assuming that you don't have any critical files there, since I'm guessing you recently installed. But if you do, move them out of the WSL instance. Then, from PowerShell or CMD: wsl --unregister kali-linux Also delete the (old, outdated) package files you downloaded previously. Next, go to the Microsoft Store and install Kali. 
This is going to install the package files under the protected C:\Program Files\WindowsApps\ directory, which is why we temporarily need space on that drive. Assuming you are running Windows Terminal, run Start-Process wt -Verb RunAs in PowerShell to get an admin prompt. If not Windows Terminal, run PowerShell as Admin. In the Admin PowerShell, run Get-ChildItem -Recurse 'C:\Program Files\WindowsApps\Kali*' | Where-Object {$_.Name -eq 'install.tar.gz' } | % { $_.DirectoryName } | Set-Clipboard to get the directory where the Kali package was installed. Confirm that it found the right path via Get-Clipboard. It should return something like C:\Program Files\WindowsApps\KaliLinux.54290C8133FEE_1.8.0.0_x64__ey8k8hqnwqnmg. This path may change in the future (if you are reading this answer later), but that's okay. Exit the Admin Shell and return to your "normal user" PowerShell: mkdir D:\wsl\instances\kali-linux cd D:\wsl\instances\kali-linux Get-Clipboard # Confirm that the Kali package path is still on the clipboard Copy-Item "$(Get-Clipboard)\*" This will copy the package files over to the D: drive. Of course, you can set the installation path however you want. I personally use a wsl\instances\distro-name format since I keep multiple distributions and instances around. I recommend it for future-proofing in case you want more copies later. You should now have a number of files in the destination directory. Something like: Name ---- AppxBlockMap.xml AppxManifest.xml AppxMetadata/ AppxSignature.p7x Assets/ install.tar.gz kali.exe resources.pri From here, you should be back in familiar territory. Just run .\kali.exe, wait a few moments, set your username and password, and WSL will launch into Kali. Exit Kali and (back in PowerShell) remove the package files from that directory with Remove-Item Appx*, Assets, install.tar.gz, resources.pri. For WSL1, you should end up with: A rootfs directory, where the filesystem is stored. 
IMPORTANT: Do NOT access this for any reason from Windows/PowerShell/CMD/Notepad/Any other Windows App, etc. There be dragons. A temp directory fsserver And the kali.exe command that we copied over (but didn't delete). It's not all that useful, IMHO, but you might want it. It can be used to launch Kali, but the wsl command is much better (more features, better supported). Uninstall Kali from the Microsoft Store to reclaim your space on the C: drive. Your installation should be the latest and greatest available from Kali in the Microsoft Store. You can check the Kali release with cat /etc/os-release. At present, this is 2021.2. sudo apt update && sudo apt upgrade should now work just fine ...
How to apt update for kali on a manually installed wsl?
1,614,473,006,000
I want to be able to encrypt and decrypt a simple file. I followed this tutorial to generating an OpenPGP Key, it gets stuck at this step You will be asked to tap on the keyboard (or do any of the things you normally do) in order for randomization to take place. And are these the right commands to encrypt and decrypt the file ? alice% gpg --output doc.gpg --encrypt --recipient blake% gpg --output doc --decrypt doc.gpg
Generation of new OpenPGP key pairs with GnuPG requires quite a lot of entropy, and thus key generation can take some time. Do some work while waiting to help the kernel provide more random bits, in case of virtual machines which often suffer from low entropy consider using software like haveged. The commands seem reasonable apart that --recipient requires an argument (it is used to define the recipient, provide a key ID or mail address). Generally, GnuPG should always have options precede commands -- the difference is not always easy to spot (all are prefixed with dashes), but options explain how to do something, while commands define what GnuPG should do (encrypt, sign, decrypt, create keys, ...). Finally, you missed providing some input (there are different ways to do so). So your first command should rather read: gpg --output doc.gpg --recipient <key-definition> --encrypt < message.txt
Encryption & decryption with PGP
1,614,473,006,000
Getting this error on running following command sudo apt-get update Error: GPG error: http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly Release: Detached signature file '/var/lib/apt/lists/partial/ubuntu-cloud.archive.canonical.com_ubuntu_dists_precise-updates_grizzly_Release.gpg' is in unsupported binary format
You are trying to install to Kali some updates intended for a version of Ubuntu (12.04 LTS, Precise Pangolin) that reached End-of-Life at the end of April 2017. The particular Release file apt-get is accessing is from year 2014, and its GPG signature file from year 2016. See for yourself: http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/grizzly/ If your version of Kali is otherwise up to date, the contents of this repository should be way outdated for it. Why on earth would you expect that to work at all??? Nevertheless, it looks like the Release.gpg file of that repository on your local disk may have been corrupted on download. Have you tried running sudo apt-get clean and trying again? If you want to get rid of that repository, check your /etc/apt/sources.list file and any *.list files in /etc/apt/sources.list.d/ directory. Find a line that looks like this: deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly [...there may be something else here...] and remove or comment it out. Then run sudo apt-get clean, and then try updating again.
GPG: Unsupported Binary Format
1,614,473,006,000
How can I install GnuPG on my CentOS 7 system? I want to use GnuPG alongside Thunderbird and Enigmail to manage pgp keys, as per the instructions in this link. The problem is that the download instructions I find for linux all have to do with Debian. Here is an example. EDIT typing which gpg resulted in /usr/bin/gpg, however, it is not clear that this is the same aspect of GnuGPG that needs to be integrated with Thunderbird and Enigmail to manage gpg keys. Before this question can be considered answered, I need to know that GnuPG is installed in a way that can run with Thunderbird and Enigmail. Thus, the answer would give instructions for checking status, and instructions for downloading if it is not properly installed yet. I imagine this might only take several lines of actual methods.
install Enigmail Thunderbird > Add-ons Menu > Enigmail If you do not already have a PGP key, generate one: gpg --gen-key (follow its prompts to complete the process; default values are generally fine) 3. Restart Thunderbird. Enigmail will probably auto-detect the presence of your GnuPG keychain and use it. If it does not, point it to your GnuPG dir: Thunderbird > Enigmail > Key Management In the Key Management window, select File > Import Keys from File and show Enigmail to your /home/$USER/.gnupg directory. Import your key; ignore errors from Enigmail that it already knew about your key. You should now see your key listed in the Key Management window. 5. Email somebody!
installing GnuGPG with Thunderbird on CentOS 7
1,336,906,003,000
I installed CUDA toolkit on my computer and started BOINC project on GPU. In BOINC I can see that it is running on GPU, but is there a tool that can show me more details about that what is running on GPU - GPU usage and memory usage?
For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and temperature of GPU. There also is a list of compute processes and few more options but my graphic card (GeForce 9600 GT) is not fully supported. Sun May 13 20:02:49 2012 +------------------------------------------------------+ | NVIDIA-SMI 3.295.40 Driver Version: 295.40 | |-------------------------------+----------------------+----------------------+ | Nb. Name | Bus Id Disp. | Volatile ECC SB / DB | | Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. | |===============================+======================+======================| | 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A | | 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default | |-------------------------------+----------------------+----------------------| | Compute processes: GPU Memory | | GPU PID Process name Usage | |=============================================================================| | 0. Not Supported | +-----------------------------------------------------------------------------+
GPU usage monitoring (CUDA)
1,336,906,003,000
How can I verify whether hardware acceleration is available and whether it is enabled for my video card.
If you don't already have it, install glxinfo; in APT it's part of mesa-utils: apt-get install mesa-utils Run glxinfo and look for a line about direct rendering (another term for hardware acceleration): > glxinfo | grep "direct rendering" direct rendering: Yes If it says "Yes", hardware acceleration is enabled
How to verify if hardware acceleration is enabled?
1,336,906,003,000
How to install Cuda Toolkit 7.0 or 8 on Debian 8? I know that Debian 8 comes with the option to download and install CUDA Toolkit 6.0 using apt-get install nvidia-cuda-toolkit, but how do you do this for CUDA toolkit version 7.0 or 8? I tried installing using the Ubuntu installers, as described below: sudo wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.0-28_amd64.deb dpkg -i cuda-repo-ubuntu1404_7.0-28_amd64.deb sudo apt-get update sudo apt-get install -y cuda However it did not work and the following message was returned: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: cuda : Depends: cuda-7-0 (= 7.0-28) but it is not going to be installed E: Unable to correct problems, you have held broken packages.
The following instructions are valid for CUDA 7.0, 7.5, and several previous (and probably later) versions. As far as Debian distributions, they're valid for Jessie and Stretch and probably other versions. They assume an amd64 (x86_64) architecture, but you can easily adapt them for x86 (x86_32). Installation prerequisites g++ - You should use the newest GCC version supported by your version of CUDA. For CUDA 7.x this would be version 4.9.3, last of the 4.x line; for CUDA 8.0, GCC 5.x versions are supported. If your distribution uses GCC 5.x by default, use that, otherwise GCC 5.4.0 should do. Earlier versions are usable but I wouldn't recommend them, if only for the better modern-C++ feature support for host-side code. gcc - comes with g++. I even think CMake might default to having nvcc invoke gcc rather than g++ in some cases with a -x switch (but not sure about this). libGLU - Mesa OpenGL libraries (+ development files?) libXi - X Window System Xinput extension libraries (+ development files?) libXmu - X Window System "miscellaneous utilities" library (+ development files?) Linux kernel - headers for the kernel version you're running. If you want a list of specific packages - well, that depends on exactly which distribution you're using. But you can try the following (for CUDA 7.x): sudo apt-get install gcc g++ gcc-4.9 g++-4.9 libxi libxi6 libxi-dev libglu1-mesa libglu1-mesa-dev libxmu6 libxmu6-dev linux-headers-amd64 linux-source And you might add some -dbg versions of those packages for debugging symbols. I'm pretty sure this covers it all - but I might have missed something I just had installed already. Also, CUDA can work with clang, at least experimentally, but I haven't tried that. Installing the CUDA kernel driver Go to NVIDIA's CUDA Downloads page. Choose Linux > x86_64 > Ubuntu , and then whatever latest version they have (at the time of writing: Ubuntu 15.04). Choose the .run file option. Download the .run file (currently this one). 
Make sure not to put it in /tmp. Make the .run file executable: chmod a+x cuda_7.5.18_linux.run. Become root. Execute the .run file: Pretend to accept their silly shrink-wrap license; say "yes" to installing just the NVIDIA kernel driver, and say "no" to everything else. The installation should tell you it expects to have installed the NVIDIA kernel driver, but that you should reboot before continuing/retrying the toolkit installation. So... Having apparently succeeded, reboot. Installing CUDA itself Be root. Locate and execute cuda_7.5.18_linux.run This time around, say No to installing the driver, but Yes to installing everything else, and accept the default paths (or change them, whatever works for you). The installer is likely to now fail. That is a good thing assuming it's the kind of failure we expect: It should tell you your compiler version is not supported - CUDA 7.0 or 7.5 supports up to gcc 4.9 and you have some 5.x version by default. Now, if you get a message about missing libraries, that means my instructions above regarding prerequisites somehow failed, and you should comment here so I can fix them. Assuming you got the "good failure", proceed to: Re-invoke the .run file, this time with the --override option. Make the same choices as in step 11. CUDA should now be installed, by default under /usr/local/cuda (that's a symlink). But we're not done! Directing NVIDIA's nvcc compiler to use the right g++ version NVIDIA's CUDA compiler actually calls g++ as part of the linking process and/or to compile actual C++ rather than .cu files. I think. Anyway, it defaults to running whatever's in your path as g++; but if you place another g++ under /usr/local/cuda/bin, it will use that first! So... Execute symlink /usr/bin/g++-4.9 /usr/local/cuda/bin/g++ (and for good measure, maybe also symlink /usr/bin/gcc-4.9 /usr/local/cuda/bin/gcc. That's it. 
Trying out the installation cd /root/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd make The build should conclude successfully, and when you do ./vectorAdd you should get the following output: root@mymachine:~/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd# ./vectorAdd [Vector addition of 50000 elements] Copy input data from the host memory to the CUDA device CUDA kernel launch with 196 blocks of 256 threads Copy output data from the CUDA device to the host memory Test PASSED Done Notes You don't need to install the NVIDIA GDK (GPU Development Kit), but it doesn't hurt and it might be useful for some. Install it to the root directory of your system; it's pretty safe and there's an uninstaller afterwards: /usr/bin/uninstall_gdk.pl. In CUDA 8 it's already integrated into CUDA itself IIANM. Do not install additional packages with names like nvidia-... or cuda... ; they might not hurt but they'll certainly not help. Before doing any of these things, you might want to make sure your GPU is recognized at all, using lspci | grep -i nvidia.
How to install CUDA Toolkit 7/8/9 on Debian 8 (Jessie) or 9 (Stretch)?
1,336,906,003,000
How is it possible to control the fan speed of multiple consumer NVIDIA GPUs such as Titan and 1080 Ti on a headless node running Linux?
The following is a simple method that does not require scripting, connecting fake monitors, or fiddling and can be executed over SSH to control multiple NVIDIA GPUs' fans. It has been tested on Arch Linux. Create xorg.conf sudo nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=7 This will create an /etc/X11/xorg.conf with an entry for each GPU, similar to the manual method. Note: Some distributions (Fedora, CentOS, Manjaro) have additional config files (eg in /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/), which override xorg.conf and set AllowNVIDIAGPUScreens. This option is not compatible with this guide. The extra config files should be modified or deleted. The X11 log file shows which config files have been loaded. Alternative: Create xorg.conf manually Identify your cards' PCI IDs: nvidia-xconfig --query-gpu-info Find the PCI BusID fields. Note that these are not the same as the bus IDs reported in the kernel. Alternatively, do sudo startx, open /var/log/Xorg.0.log (or whatever location startX lists in its output under the line "Log file:"), and look for the line NVIDIA(0): Valid display device(s) on GPU-<GPU number> at PCI:<PCI ID>. 
Edit /etc/X11/xorg.conf Here is an example of xorg.conf for a three-GPU machine: Section "ServerLayout" Identifier "dual" Screen 0 "Screen0" Screen 1 "Screen1" RightOf "Screen0" Screen 1 "Screen2" RightOf "Screen1" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:5:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:6:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration" EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:9:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration" EndSection Section "Screen" Identifier "Screen0" Device "Device0" EndSection Section "Screen" Identifier "Screen1" Device "Device1" EndSection Section "Screen" Identifier "Screen2" Device "Device2" EndSection The BusID must match the bus IDs we identified in the previous step. The option AllowEmptyInitialConfiguration allows X to start even if no monitor is connected. The option Coolbits allows fans to be controlled. It can also allow overclocking. Note: Some distributions (Fedora, CentOS, Manjaro) have additional config files (eg in /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/), which override xorg.conf and set AllowNVIDIAGPUScreens. This option is not compatible with this guide. The extra config files should be modified or deleted. The X11 log file shows which config files have been loaded. Edit /root/.xinitrc nvidia-settings -q fans nvidia-settings -a [gpu:0]/GPUFanControlState=1 -a [fan:0]/GPUTargetFanSpeed=75 nvidia-settings -a [gpu:1]/GPUFanControlState=1 -a [fan:1]/GPUTargetFanSpeed=75 nvidia-settings -a [gpu:2]/GPUFanControlState=1 -a [fan:2]/GPUTargetFanSpeed=75 I use .xinitrc to execute nvidia-settings for convenience, although there's probably other ways. The first line will print out every GPU fan in the system. 
Here, I set the fans to 75%. Launch X sudo startx -- :0 You can execute this command from SSH. The output will be: Current version of pixman: 0.34.0 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Sat May 27 02:22:08 2017 (==) Using config file: "/etc/X11/xorg.conf" (==) Using system config directory "/usr/share/X11/xorg.conf.d" Attribute 'GPUFanControlState' (pushistik:0[gpu:0]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:0]) assigned value 75. Attribute 'GPUFanControlState' (pushistik:0[gpu:1]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:1]) assigned value 75. Attribute 'GPUFanControlState' (pushistik:0[gpu:2]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:2]) assigned value 75. Monitor temperatures and clock speeds nvidia-smi and nvtop can be used to observe temperatures and power draw. Lower temperatures will allow the card to clock higher and increase its power draw. You can use sudo nvidia-smi -pl 150 to limit power draw and keep the cards cool, or use sudo nvidia-smi -pl 300 to let them overclock. My 1080 Ti runs at 1480 MHz if given 150W, and over 1800 MHz if given 300W, but this depends on the workload. You can monitor their clock speed with nvidia-smi -q or more specifically, watch 'nvidia-smi -q | grep -E "Utilization| Graphics|Power Draw"' Returning to automatic fan management. Reboot. I haven't found another way to make the fans automatic.
How to adjust NVIDIA GPU fan speed on a headless node?
1,336,906,003,000
I'm working on a system with multiple NVIDIA GPUs. I would like disable / make-disappear one of my GPUs, but not the others; without rebooting; and so that I can later re-enable it. Is this possible? Notes: Assume I have root (though a non-root solution for users which have permissions for the device files is even better). In case it matters, the distribution is either SLES 12 or SLES 15, and - don't ask me why :-(
Disabling: The following disables a GPU, making it invisible, so that it's not on the list of CUDA devices you can find (and it doesn't even take up a device index) nvidia-smi -i 0000:xx:00.0 -pm 0 nvidia-smi drain -p 0000:xx:00.0 -m 1 where xx is the PCI device ID of your GPU. You can determine that using lspci | grep NVIDIA or nvidia-smi. The device will still be visible with lspci after running the commands above. Re-enabling: nvidia-smi drain -p 0000:xx:00.0 -m 0 the device should now be visible Problems with this approach This may fail to work if you are not root; or in some scenarios I can't yet characterize. Haven't yet checked what happens to procesess which are actively using the GPU as you do this. The syntax is baroque and confusing. NVIDIA - for shame, you need to make it simpler to disable GPUs.
How can I disable (and later re-enable) one of my NVIDIA GPUs?
1,336,906,003,000
I have these commands compiled and running but their contents are a bit of a mystery to me. The processes from intel-gpu-overlay read something like: 15R, 16B, 41ms waits. What is an R, what is a B, what does that wait time indicate? It has CPU: 152% (I'd guess this is the same as what I get from top). render: 32%, bitstream: 6%, blt: 6%. What kinds of code would cause these values to bottle neck and what would be the behavior of the system when they did? Here is a sample of intel-gpu-top: render busy: 23%: ████▋ render space: 12/16384 task percent busy GAM: 29%: █████▉ vert fetch: 1380772913 (5386667/sec) CS: 23%: ████▋ prim fetch: 350972637 (1368891/sec) GAFS: 9%: █▉ VS invocations: 1375586768 (5385212/sec) TSG: 8%: █▋ GS invocations: 0 (0/sec) VFE: 7%: █▌ GS prims: 0 (0/sec) SVG: 3%: ▋ CL invocations: 677098924 (2648400/sec) VS: 3%: ▋ CL prims: 682224019 (2663834/sec) URBM: 2%: ▌ PS invocations: 9708568482932 (34396218804/sec) VF: 2%: ▌ PS depth pass: 15549624948405 (58732230331/sec) SDE: 0%: CL: 0%: SF: 0%: TDG: 0%: RS: 0%: GAFM: 0%: SOL: 0%:
Taken from the link given in the comments in OP. I was curious as well, so here are just a few things I could grab from the reference manuals. Also of interest is the intel-gpu-tools source, and especially lib/instdone.c which describes what can appear in all Intel GPU models. This patch was also hugely helpful in translating all those acronyms! Some may be wrong, I'd love it if somebody more knowledgeable could chime in! I'll come back to update the answer with more as I learn this stuff. First, the three lines on the right : The render space is probably used by regular 3D operations. From googling, bitstream seems to be about audio decoding? This is quite a generic term, so hard to find with a query. It does not appear on my GPU though (Skylake HD 530), so it might not be everywhere. The blitter is described in vol. 11 and seems responsible for hardware acceleration of 2D operations (blitting). Fixed function (FF) pipeline units (old-school GPU features) : VF: Vertex Fetcher (vol. 1), the first FF unit in the 3D Pipeline responsible for fetching vertex data from memory. VS: Vertex Shader (vol.1), computes things on the vertices of each primitive drawn by the GPU. Pretty standard operation on GPUs. HS: Hull Shader TE: Tessellation Engine DS: Domain Shader GS: Geometry Shader SOL: Stream Output Logic CL: Clip Unit SF: Strips and Fans (vol.1), FF unit whose main function is to decompose primitive topologies such as strips and fans into primitives or objects. Units used for thread and pipeline management, for both FF units and GPGPU (see Intel Open Source HD Graphics Programmers Manual for a lot of info on how this all works) : CS: Command Streamer (vol.1), functional unit of the Graphics Processing Engine that fetches commands, parses them, and routes them to the appropriate pipeline. TDG: Thread Dispatcher VFE: Video Front-End TSG: Thread Spawner URBM: Unified Return Buffer Manager Other stuff : GAM: see GFX Page Walker (vol. 
5), also called Memory Arbiter, has to do with how the GPU keeps track of its memory pages, seems quite similar to what the TLB (see also SLAT) does for your RAM. SDE: South Display Engine ; according to vol. 12, "the South Display Engine supports Hot Plug Detection, GPIO, GMBUS, Panel Power Sequencing, and Backlight Modulation". Credits: StackOverflow User F.X.
How do I interpret the output of intel-gpu-top and intel-gpu-overlay?
1,336,906,003,000
I have seen in forums and manuals that you have to add Option "Coolbits" "value" to xorg.conf or similar files. I have been able to get this working for the first GPU, the one rendering the display. I have not been able to get overclocking options in nvidia-settings for the second GPU, not rendering any display. I have tried things like Section "Device" Identifier "Videocard0" Driver "nvidia" BusID "PCI:2:00:0" Option "Coolbits" "12" EndSection Section "Device" Identifier "Videocard1" Driver "nvidia" BusID "PCI:3:00:0" Option "Coolbits" "12" EndSection in the various files: xorg.conf, 99-nvidia.conf, nvidia-xorg.conf. Everything I have tried has led to black screens, no overclocking capability or overclocking capability on the first GPU only. Is it possible to unlock overclocking for both GPUs, if so how? I have not found this question asked anywhere. I am running 346.59 drivers on Fedora 21.
Changing the xorg.conf file to add virtual X servers for each of the cards (even those not connected to a monitor) solved the issue. Basically, you want to have a server layout section with all of your real and virtual screens: Section "ServerLayout" Identifier "Layout0" # Our real monitor Screen 0 "Screen0" 0 0 # Our virtual monitors Screen 1 "Screen1" Screen 2 "Screen2" # .... Screen 3 "Screen3" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Then, for each your cards, you can put in (almost) identical "Monitor", "Screen" and "Display" sections, differing only by their identifiers, which in the following are N, but should be repaced by the card number, 0,1, etc. Note that at least the parameters for the real monitor should correspond to what you currently have in your xorg.conf file, i.e. in the following I have CRT since it's an old VGA monitor. Section "Screen" Identifier "ScreenN" Device "DeviceN" Monitor "MonitorN" DefaultDepth 24 Option "ConnectedMonitor" "CRT" Option "Coolbits" "5" Option "TwinView" "0" Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Monitor" Identifier "MonitorN" VendorName "Unknown" ModelName "CRT-N" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "DeviceN" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Your Card name here" BusID "PCI:X:Y:Z" EndSection
Multi Nvidia GPU overclocking for computations (CUDA)
1,336,906,003,000
I am using ArchLinux on an HP Pavilion dv9000t which has overheating problems. I did all what I can do to get a better air flow in the laptop and put a better thermal paste but there is still a problem: the fan stops spinning when the CPU temperature is low (even if the GPU temperature is high, which is problematic). I found out I can get the fan running by launching some heavy processing commands (like the yes command). However, it is not a solution because I need to stop this command when the CPU gets too hot and launch it again when the fan stops (so that the GPU does not get hot). I tried to control the fan using this wiki, but when I run pwmconfig, I get this error: /usr/bin/pwmconfig: There are no pwm-capable sensor modules installed Do you know what can I do to get the fan always spinning? Edit: The sensors-dectect output is the following: ~/ sudo sensors-detect # sensors-detect revision 6170 (2013-05-20 21:25:22 +0200) # System: Hewlett-Packard HP Pavilion dv9700 Notebook PC [Rev 1] (laptop) # Board: Quanta 30CB This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing. Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): Module cpuid loaded successfully. Silicon Integrated Systems SIS5595... No VIA VT82C686 Integrated Sensors... No VIA VT8231 Integrated Sensors... No AMD K8 thermal sensors... No AMD Family 10h thermal sensors... No AMD Family 11h thermal sensors... No AMD Family 12h and 14h thermal sensors... No AMD Family 15h thermal sensors... No AMD Family 15h power sensors... No AMD Family 16h power sensors... No Intel digital thermal sensor... Success! (driver `coretemp') Intel AMB FB-DIMM thermal sensor... No VIA C7 thermal sensor... No VIA Nano thermal sensor... 
No Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): Probing for Super-I/O at 0x2e/0x2f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Probing for Super-I/O at 0x4e/0x4f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): Probing for `National Semiconductor LM78' at 0x290... No Probing for `National Semiconductor LM79' at 0x290... No Probing for `Winbond W83781D' at 0x290... No Probing for `Winbond W83782D' at 0x290... No Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): Using driver `i2c-i801' for device 0000:00:1f.3: Intel 82801H ICH8 Module i2c-dev loaded successfully. Next adapter: nouveau-0000:01:00.0-0 (i2c-0) Do you want to scan it? (yes/NO/selectively): Next adapter: nouveau-0000:01:00.0-1 (i2c-1) Do you want to scan it? (yes/NO/selectively): Next adapter: nouveau-0000:01:00.0-2 (i2c-2) Do you want to scan it? (yes/NO/selectively): Now follows a summary of the probes I have just done. Just press ENTER to continue: Driver `coretemp': * Chip `Intel digital thermal sensor' (confidence: 9) Do you want to overwrite /etc/conf.d/lm_sensors? (YES/no): Unloading i2c-dev... OK Unloading cpuid... 
OK The file /etc/conf.d/lm_sensors contains: HWMON_MODULES="coretemp" And the file /etc/modules-load.d/lm_sensors.conf contains: coretemp acpi-cpufreq The command sensors outputs this: ~/ sensors coretemp-isa-0000 Adapter: ISA adapter Core 0: +46.0°C (high = +85.0°C, crit = +85.0°C) Core 1: +47.0°C (high = +85.0°C, crit = +85.0°C) acpitz-virtual-0 Adapter: Virtual device temp1: +49.0°C nouveau-pci-0100 Adapter: PCI adapter temp1: +60.0°C (high = +95.0°C, hyst = +3.0°C) (crit = +115.0°C, hyst = +5.0°C) (emerg = +115.0°C, hyst = +5.0°C)
I finally decided to choose a hardware solution. I cut two wires from the fan and now the fan always spin (at the max level though). I found this solution in this blog post.
How to force the fan to always spin?
1,336,906,003,000
I wonder how to log GPU load. I use Nvidia graphic cards with CUDA. Not a duplicate: I want to log.
It's all there. You just didn't read carefuly :) Use the following python script which uses an optional delay and repeat like iostat and vmstat: https://gist.github.com/matpalm/9c0c7c6a6f3681a0d39d You can also use nvidia-settings: nvidia-settings -q GPUUtilization -q useddedicatedgpumemory ...and wrap it up with some simple bash loop or setup a cron job or just use watch: watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"'
How to log GPU load? [duplicate]
1,336,906,003,000
I have installed latest nVidia graphic driver via this PPA "xorg-edgers/ppa". Now in Nvidia X server setting showing the driver version is 346.35. But in Ubuntu's Additional Drivers there is no such driver rather it marks the Nouveau driver. I ran lspci -vnn | grep -i VGA -A 12. 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF104 [GeForce GTX 460] [10de:0e22] (rev a1) (prog-if 00 [VGA controller]) Subsystem: Gigabyte Technology Co., Ltd Device [1458:34fc] Flags: bus master, fast devsel, latency 0, IRQ 53 Memory at fc000000 (32-bit, non-prefetchable) [size=32M] Memory at d8000000 (64-bit, prefetchable) [size=128M] Memory at d4000000 (64-bit, prefetchable) [size=64M] I/O ports at b800 [size=128] [virtual] Expansion ROM at fe780000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia 01:00.1 Audio device [0403]: NVIDIA Corporation GF104 High Definition Audio Controller [10de:0beb] (rev a1) Subsystem: Gigabyte Technology Co., Ltd Device [1458:34fc] Which version of graphic driver I am using currently ? If I am not using nVidia's driver then how can I use nVidia's driver. I am running Ubuntu 14.04 64bit OS.
Update typing: nvidia-smi gives you the driver verson nowadays. Old answer: Typing nvidia-settings --version will tell you what version of the NVidia driver is currently installed (even when it's not running). lsmod | grep video will show you the running video module. modinfo szWhateverWasTheOutputOfThePreviousCommand will give you the version of the running module.
What is the GPU driver I am currently running?
1,336,906,003,000
I use the CUDA toolkit to perform some computations on my Nvidia GPUs. How to kill all processes that use a given GPU? (killing at once, i.e. without having to manually type the PIDs behind kill -9.) E.g. killing all processes using GPU 2:
Following the Unix philosophy, you have a tool that lists processes using a given GPU, and a tool that kills processes. Combine them using shell constructs and text processing tools. For example, to kill all the processes using GPU 2, you can execute the following command: kill $(nvidia-smi | awk '$2=="Processes:" {p=1} p && $2 == 2 && $3 > 0 {print $3}') or kill $(nvidia-smi -g 2 | awk '$2=="Processes:" {p=1} p && $3 > 0 {print $3}')
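On newer drivers, nvidia-smi can emit the compute PIDs directly, which avoids scraping the human-readable table (the --query-compute-apps flags assume a reasonably recent nvidia-smi):

```shell
# Kill every compute process on GPU 2; xargs -r skips the kill entirely
# when the PID list is empty.
nvidia-smi -i 2 --query-compute-apps=pid --format=csv,noheader | xargs -r kill
```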
How to kill all processes using a given GPU?
1,336,906,003,000
How can I tell what OpenGL versions are supported on my (Arch Linux) machine?
$ sudo pacman -S mesa-demos $ glxinfo | grep "OpenGL version"
How can I tell what version of OpenGL my machine supports on Arch Linux?
1,336,906,003,000
How can I turn off hardware acceleration in Linux, also known as direct rendering? I wish to turn this off, as it interferes with some applications like OBS Studio, which can't capture the output of other hardware-accelerated applications while acceleration is enabled for the entire system. Certain apps can turn it on and off themselves, but I can't do this for the desktop and other apps. When adding a source to capture from in OBS it just shows a blank capture image; for example, if I wanted to record my desktop, it would just show up as a blank capture input. Capturing a web browser like Google Chrome doesn't work either, unless it's a single window with no tabs and hardware acceleration is turned off in its settings. Graphics: Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0 Card-2: NVIDIA GF108M [GeForce GT 630M] bus-ID: 01:00.0 Display Server: X.Org 1.15.1 driver: nvidia Resolution: [email protected] GLX Renderer: GeForce GT 630M/PCIe/SSE2 GLX Version: 4.5.0 NVIDIA 384.90 Direct Rendering: Yes
You can configure Xorg to disable OpenGL / GLX. For a first try, you can run a second X session: switch to tty2, log in and type: startx -- :2 vt2 -extension GLX To permanently disable hardware acceleration, create a file: /etc/X11/xorg.conf.d/disable-gpu.conf with the content: Section "Extensions" Option "GLX" "Disable" EndSection Note that Xwayland in Wayland compositors like Gnome3-Wayland will ignore settings in xorg.conf.d.
How to disable Hardware Acceleration in Linux?
1,336,906,003,000
What distinguishes kitty from the vast majority of terminal emulators? It offers GPU acceleration combined with a wide feature set. It's targeted at power keyboard users. It's billed as a modern, hackable, featureful, OpenGL-based terminal emulator. What are the advantages of hardware-accelerated terminal emulators? Is it speed? How do you notice that in daily command execution? Classic terminals don't seem too slow; the bottleneck is mostly the human typing.
They can potentially be faster at outputting and refreshing vast amounts of information. It could also allow for smooth(er) scrolling. Human beings, however, are quite slow at reading this information, so I'm kinda doubtful this can be beneficial; the average person is unlikely to be able to comprehend it anyway. CPU usage could be lower, but that needs to be tested. At the same time such terminals are eating your VRAM, which could be an issue for users who have little VRAM or whose VRAM is carved out of RAM (users with integrated graphics). One way to measure performance is to generate or find a very large text file and measure the time and CPU time it takes to output it. $ cat bigfile > /dev/null # cache it $ time cat bigfile could be enough. I didn't have terminals with HW acceleration, so I couldn't test it before; I've just installed kitty. XFCE4 Terminal: real 0m1.760s user 0m0.000s sys 0m0.342s Kitty: real 0m1.007s user 0m0.000s sys 0m0.282s That's for a 41MB file with over 700K lines: $ wc -l test.txt 751900 test.txt In both cases you can barely see or read anything on the screen.
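To reproduce the comparison without hunting for a big file, you can generate one; a sketch (the line count matches the answer's file, and the timings will of course differ per machine and terminal):

```shell
# Build a ~750K-line test file, warm the page cache, then time the dump.
seq -f 'line %.0f of the terminal throughput test' 1 751900 > bigfile
cat bigfile > /dev/null
time cat bigfile
```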
What are the advantages of hardware-accelerated terminal emulators?
1,336,906,003,000
How does X11 forwarding work? I am interested in knowing whether the processing to render graphics is done on the host running the application or on the host displaying the graphical interface. Should I use a GPU-intensive application (a game), where should I have the GPU installed (server end / client end)? Of course the server will need a GPU if it is running CUDA / OpenCL applications, but what about the display? This question was closed on StackOverflow. I was pointed to this link, but I am looking to understand more of the underlying protocol and performance tweaks.
Assuming you're using OpenGL, the GPU should be installed on the host where the X server is running. The client will send rendering commands to the X server, which will then take advantage of the GPU to process the rendering commands.
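A quick way to see which side actually renders is to check the renderer string over a forwarded session; a sketch ("remotehost" is a placeholder, and glxinfo must be installed on the remote side):

```shell
# With X11 forwarding, glxinfo runs on the remote host but talks to the
# X server on your side, so the renderer string reflects that server's GPU.
ssh -X remotehost glxinfo | grep "OpenGL renderer" || true
```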
How does X11 forwarding work?
1,336,906,003,000
I'm trying to understand what the difference is between DRM (Direct Rendering Manager) and a graphics driver, such as AMD or Nvidia GPU drivers. Reading the DRM wiki[1], it seems to me like DRM is basically a graphics hardware driver, however this doesn't explain the existence of proprietary or FOSS graphics drivers for discrete GPUs. What then, is the difference, or use case, for DRM over mesa or Nvidia drivers? What happens with DRM when AMD drivers are installed? Are they used for different tasks? Are proprietary drivers built around DRM? [1]https://en.wikipedia.org/wiki/Direct_Rendering_Manager
"Graphics driver" can mean any number of things. The way X (the graphical windowing system) works is that there is a central X server, which can load modules ("X drivers") for different hardware, like vesa, fbdev, nvidia, nouveau, amdgpu. Some of these drivers can work on their own (vesa). Some need Linux kernel drivers. Many of these kernel drivers follow the "direct rendering manager API", and therefore they are called "DRM drivers". Others, like the proprietary nvidia driver (which needs both an X driver and a kernel driver), don't. It gets more complicated: the hardware consists of parts that read out the framebuffer and display it at different resolutions etc. This is called "modesetting". Modern graphics cards also have a GPU, which is used to accelerate 3D drawing (OpenGL). "DRM kernel drivers" provide an interface for both. "Mesa" is a software library that understands OpenGL, but does the rendering either on the CPU, or on some (but not all) GPUs (see here for a list). So the Mesa library can offer this functionality for graphics cards that do not (or do not sufficiently) have hardware for this, or can serve as the OpenGL library for a few GPUs. You could probably make a case to call anything in this complex picture a "graphics driver".
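You can see the DRM side of this picture directly; a sketch using the standard sysfs/devtmpfs paths (output varies per machine, and /dev/dri may be absent when no DRM driver is loaded):

```shell
# card* nodes are the full modesetting+rendering interface; renderD* nodes
# are render-only (for GPU compute/offscreen rendering).
ls -l /dev/dri/ 2>/dev/null || echo "no DRM devices found"
# Which kernel driver is bound to the first card, if any:
readlink /sys/class/drm/card0/device/driver 2>/dev/null || true
```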
What's the difference between DRM and a graphics driver?
1,336,906,003,000
I'm about to buy a new laptop that is being used with Linux only. Unfortunately finding a Linux laptop is not simple at all, and it seems the only option I found includes a nvidia Quadro M1200 and an Intel HD 630. I know that it is very complex/impossible to properly run wayland (Ubuntu for instance) on nvidia. Actually I don't care in any way about the nvidia GPU, the Intel GPU should be more than sufficient. But is it possible to completely disable the nvidia GPU to let wayland run properly on the Intel GPU? I read about nvidia prime: can I use it like this? Can I completely disable nvidia and just forget about it, like it was not even there?
The answer was simple: just install the nvidia drivers, open the nvidia settings page and set it to use the Intel HD GPU only. Log in again and you are done. It works perfectly. The battery lasts much, much longer and wayland works properly. As soon as the nvidia GPU is enabled, it seems that the fan turns on immediately and keeps running even when idle. That is probably a large part of the battery consumption. I'm wondering if that is reasonable or not: is that fan really always needed? NOTE: I recently discovered that what I described is a Ubuntu-specific patch applied to the nvidia config app. Other distros may not include it at all. Manjaro, for instance, does not include it in any way. It is probably possible to set it up manually, but I didn't succeed. NOTE2: blacklisting nvidia and nouveau is sufficient to run on Intel only. Not sure how to run on nVidia only.
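For the blacklisting route mentioned in the note, a modprobe fragment along these lines is the usual approach (the file name is arbitrary, and whether the nvidia_drm/nvidia_modeset modules exist depends on the driver package):

```
# /etc/modprobe.d/blacklist-discrete-gpu.conf
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
```

After editing modprobe configuration you typically need to regenerate the initramfs (e.g. update-initramfs -u on Ubuntu) and reboot.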
Is it possible to completely turn off nvidia GPU to be able to run wayland?
1,336,906,003,000
I'm using fedora 28 because it seems to be the only distro that detects the rx560x correctly. Nonetheless, I noticed that the performance when using the discrete GPU is significantly worse than using the integrated one. My machine configuration is: ACER nitro 5 an515-42, 8gb ram, APU ryzen 2500u with vega 8 integrated graphics card, RX 560X AMD discrete graphics card. This is the output of the lspci command 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15d0 00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Device 15d1 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 15d3 00:01.6 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 15d3 00:01.7 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 15d3 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 15db 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 15dc 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61) 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51) 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15e8 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15e9 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15ea 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15eb 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15ec 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15ed 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15ee 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 15ef 01:00.0 Display controller: Advanced Micro Devices, Inc.
[AMD/ATI] Baffin [Radeon RX 460/560D / Pro 450/455/460/555/555X/560/560X] (rev c0) 02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTL8411B PCI Express Card Reader (rev 01) 02:00.1 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 12) 03:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32) 04:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Raven Ridge [Radeon Vega Series / Radeon Vega Mobile Series] (rev c4) 04:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device 15de 04:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 15df 04:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15e0 04:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15e1 04:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Device 15e3 05:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 61) As can be seen from the xrandr --listproviders output, both cards are recognized Providers: number : 2 Provider 0: id: 0x7a cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 4 outputs: 2 associated providers: 1 name:modesetting Provider 1: id: 0x44 cap: 0x5, Source Output, Source Offload crtcs: 5 outputs: 0 associated providers: 1 name:modesetting But when running applications on the discrete GPU the performance is slower. One example is running DRI_PRIME=0 glmark2 That one uses the integrated GPU and I obtained a performance of around 2000 fps, but running: DRI_PRIME=1 glmark2 gives me a performance of around 500 fps, which should not be the case because it is the discrete GPU.
This can be confirmed by the following outputs: $ DRI_PRIME=0 glxinfo | grep "OpenGL renderer" OpenGL renderer string: AMD RAVEN (DRM 3.26.0 / 4.18.7- 200.fc28.x86_64, LLVM 6.0.1) $ DRI_PRIME=1 glxinfo | grep "OpenGL renderer" OpenGL renderer string: AMD Radeon (TM) RX Graphics (POLARIS11 / DRM 3.26.0 / 4.18.7-200.fc28.x86_64, LLVM 6.0.1) I also attach the ouput of the command dmesg | grep amdgpu [ 2.967193] [drm] amdgpu kernel modesetting enabled. [ 2.983335] amdgpu 0000:01:00.0: enabling device (0002 -> 0003) [ 3.049189] amdgpu 0000:01:00.0: VRAM: 4096M 0x000000F400000000 - 0x000000F4FFFFFFFF (4096M used) [ 3.049190] amdgpu 0000:01:00.0: GTT: 256M 0x0000000000000000 - 0x000000000FFFFFFF [ 3.049332] [drm] amdgpu: 4096M of VRAM memory ready [ 3.049333] [drm] amdgpu: 4096M of GTT memory ready. [ 3.140025] [drm:dc_create [amdgpu]] *ERROR* DC: Number of connectors is zero! [ 3.277785] [drm] Initialized amdgpu 3.26.0 20150101 for 0000:01:00.0 on minor 0 [ 3.278060] fb: switching to amdgpudrmfb from EFI VGA [ 3.278487] amdgpu 0000:04:00.0: VRAM: 1024M 0x000000F400000000 - 0x000000F43FFFFFFF (1024M used) [ 3.278489] amdgpu 0000:04:00.0: GTT: 1024M 0x000000F500000000 - 0x000000F53FFFFFFF [ 3.278533] [drm] amdgpu: 1024M of VRAM memory ready [ 3.278534] [drm] amdgpu: 3072M of GTT memory ready. [ 3.468197] amdgpu: [powerplay] dpm has been enabled [ 3.468480] [drm:construct [amdgpu]] *ERROR* construct: Invalid Connector ObjectID from Adapter Service for connector index:2! type 0 expected 3 [ 3.468545] [drm:construct [amdgpu]] *ERROR* construct: Invalid Connector ObjectID from Adapter Service for connector index:3! 
type 0 expected 3 [ 3.525127] fbcon: amdgpudrmfb (fb0) is primary device [ 3.566614] amdgpu 0000:04:00.0: fb0: amdgpudrmfb frame buffer device [ 3.572124] amdgpu 0000:04:00.0: ring 0(gfx) uses VM inv eng 4 on hub 0 [ 3.572125] amdgpu 0000:04:00.0: ring 1(comp_1.0.0) uses VM inv eng 5 on hub 0 [ 3.572127] amdgpu 0000:04:00.0: ring 2(comp_1.1.0) uses VM inv eng 6 on hub 0 [ 3.572128] amdgpu 0000:04:00.0: ring 3(comp_1.2.0) uses VM inv eng 7 on hub 0 [ 3.572129] amdgpu 0000:04:00.0: ring 4(comp_1.3.0) uses VM inv eng 8 on hub 0 [ 3.572131] amdgpu 0000:04:00.0: ring 5(comp_1.0.1) uses VM inv eng 9 on hub 0 [ 3.572132] amdgpu 0000:04:00.0: ring 6(comp_1.1.1) uses VM inv eng 10 on hub 0 [ 3.572133] amdgpu 0000:04:00.0: ring 7(comp_1.2.1) uses VM inv eng 11 on hub 0 [ 3.572134] amdgpu 0000:04:00.0: ring 8(comp_1.3.1) uses VM inv eng 12 on hub 0 [ 3.572136] amdgpu 0000:04:00.0: ring 9(kiq_2.1.0) uses VM inv eng 13 on hub 0 [ 3.572137] amdgpu 0000:04:00.0: ring 10(sdma0) uses VM inv eng 4 on hub 1 [ 3.572138] amdgpu 0000:04:00.0: ring 11(vcn_dec) uses VM inv eng 5 on hub 1 [ 3.572140] amdgpu 0000:04:00.0: ring 12(vcn_enc0) uses VM inv eng 6 on hub 1 [ 3.572141] amdgpu 0000:04:00.0: ring 13(vcn_enc1) uses VM inv eng 7 on hub 1 [ 3.579107] [drm] Initialized amdgpu 3.26.0 20150101 for 0000:04:00.0 on minor 1 [ 5.202665] audit: type=1130 audit(1537458205.475:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-backlight@backlight:amdgpu_bl1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' [ 15.013231] amdgpu 0000:01:00.0: GPU pci config reset [ 19.678936] amdgpu: [powerplay] dpm has been enabled [ 26.341565] amdgpu 0000:01:00.0: GPU pci config reset [ 59.533544] amdgpu: [powerplay] dpm has been enabled [ 65.786402] amdgpu 0000:01:00.0: GPU pci config reset [ 167.130594] [drm:generic_reg_wait [amdgpu]] *ERROR* REG_WAIT timeout 1us * 10 tries - optc1_lock line:628 [ 167.130677] WARNING: CPU: 3 PID: 1899 at drivers/gpu/drm/amd/amdgpu/../display/dc/dc_helper.c:254 generic_reg_wait+0xe7/0x160 [amdgpu] [ 167.130704] snd_hda_codec irqbypass videobuf2_memops crct10dif_pclmul btusb videobuf2_v4l2 snd_hda_core btrtl crc32_pclmul btbcm videobuf2_common btintel ath snd_hwdep bluetooth snd_seq videodev ghash_clmulni_intel snd_seq_device cfg80211 joydev snd_pcm media ecdh_generic sp5100_tco snd_timer k10temp i2c_piix4 rtsx_pci_ms rfkill snd memstick soundcore wmi video pinctrl_amd acer_wireless pcc_cpufreq acpi_cpufreq amdkfd amd_iommu_v2 amdgpu chash i2c_algo_bit gpu_sched drm_kms_helper rtsx_pci_sdmmc mmc_core ttm crc32c_intel serio_raw drm r8169 rtsx_pci mii i2c_hid [ 167.130774] RIP: 0010:generic_reg_wait+0xe7/0x160 [amdgpu] [ 167.130850] optc1_lock+0xa0/0xb0 [amdgpu] [ 167.130897] dcn10_pipe_control_lock.part.28+0x4a/0x70 [amdgpu] [ 167.130944] dcn10_apply_ctx_for_surface+0xee/0x1210 [amdgpu] [ 167.130994] ? hubbub1_verify_allow_pstate_change_high+0xdd/0x180 [amdgpu] [ 167.131040] ? dcn10_verify_allow_pstate_change_high+0x1d/0x240 [amdgpu] [ 167.131085] ? dcn10_set_bandwidth+0x275/0x2d0 [amdgpu] [ 167.131129] dc_commit_state+0x269/0x580 [amdgpu] [ 167.131171] ? set_freesync_on_streams.part.6+0x4d/0x250 [amdgpu] [ 167.131213] ? mod_freesync_set_user_enable+0x11f/0x150 [amdgpu] [ 167.131260] amdgpu_dm_atomic_commit_tail+0x37c/0xd70 [amdgpu] [ 167.131298] ? 
amdgpu_bo_pin_restricted+0xd6/0x300 [amdgpu] [ 167.131441] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] [ 167.493997] [drm:hwss_edp_wait_for_hpd_ready [amdgpu]] *ERROR* hwss_edp_wait_for_hpd_ready: wait timed out! [ 992.949753] amdgpu: [powerplay] dpm has been enabled [ 999.815690] amdgpu 0000:01:00.0: GPU pci config reset [ 1097.603835] amdgpu: [powerplay] dpm has been enabled [ 1105.675220] amdgpu 0000:01:00.0: GPU pci config reset [ 1138.629401] amdgpu: [powerplay] dpm has been enabled [ 1157.490886] amdgpu 0000:01:00.0: GPU pci config reset [ 1167.629670] amdgpu: [powerplay] dpm has been enabled [ 1188.641935] amdgpu 0000:01:00.0: GPU pci config reset [ 1237.003085] amdgpu: [powerplay] dpm has been enabled [ 1243.811986] amdgpu 0000:01:00.0: GPU pci config reset [ 1287.684082] amdgpu: [powerplay] dpm has been enabled [ 1332.291087] amdgpu 0000:01:00.0: GPU pci config reset [ 1511.066683] amdgpu: [powerplay] dpm has been enabled [ 1520.167710] amdgpu 0000:01:00.0: GPU pci config reset [ 1605.023196] amdgpu: [powerplay] dpm has been enabled [ 1611.134239] amdgpu 0000:01:00.0: GPU pci config reset [ 1930.739920] amdgpu: [powerplay] dpm has been enabled [ 2105.711880] amdgpu 0000:01:00.0: GPU pci config reset [ 2440.536121] amdgpu: [powerplay] dpm has been enabled [ 2447.470108] amdgpu 0000:01:00.0: GPU pci config reset [ 2453.329765] amdgpu: [powerplay] dpm has been enabled [ 2489.324912] amdgpu 0000:01:00.0: GPU pci config reset [ 3732.604342] amdgpu: [powerplay] dpm has been enabled [ 3744.199117] amdgpu 0000:01:00.0: GPU pci config reset [ 3747.717927] amdgpu: [powerplay] dpm has been enabled [ 3900.960469] amdgpu 0000:01:00.0: GPU pci config reset [ 4876.333471] amdgpu: [powerplay] dpm has been enabled [ 4883.356467] amdgpu 0000:01:00.0: GPU pci config reset [ 5597.531838] amdgpu: [powerplay] dpm has been enabled [ 5604.423794] amdgpu 0000:01:00.0: GPU pci config reset [ 5606.860546] amdgpu: [powerplay] dpm has been enabled [ 5613.824859] amdgpu 0000:01:00.0: GPU pci 
config reset [ 5619.323471] amdgpu: [powerplay] dpm has been enabled [ 5625.588060] amdgpu 0000:01:00.0: GPU pci config reset Any help regarding this will be highly appreciated! EDIT: In any new distro that has a kernel version 5.0 and above it has been working, but with a tweak. For example, if I want to launch DOOM 2016 and use the rx560x card, I need to launch Steam from console in the following way: DRI_PRIME=1 steam The DRI_PRIME=1 command tells steam to use the discrete GPU, so when I run DOOM it uses it!
I ran into the same issue as you on Arch Linux (mesa-git, llvm-svn, linux 4.18.12; even running 4.19.x or mainline 4.20rcx). The stable Linux kernel isn't, I would say, optimized for our GPU yet. Luckily, there is a temporary solution. I tried linux-amd-wip-git (drm-next-4.21-wip), which has the latest amd patches (there are also other kernels which have them, like linux-drm-fixes-git (drm-fixes); it depends on you which you pick). Many games are unplayable on the stable kernel and most of them run better on the integrated GPU than on the discrete one, as you are also saying. linux-amd-wip-git makes the discrete GPU perform better than the integrated one, and I also get e.g. a ~30 fps gain in World Of Warcraft via wine and gallium nine/dxvk. Some games perform even better; every one I tried did. We just have to wait for these patches to be merged into stable Linux. In the end, at least we know it will be fixed. EDIT: I just tested the latest 5.0rc7, and it works like the aforementioned kernels. Seems it will be fixed in 5.0.
rx 560x slower than integrated vega gpu on fedora 28
1,336,906,003,000
I believe we are running into a possible bug with the GTX 1080 (driver) and PCI Passthrough. My host is an Ubuntu 14.04 system. My guest is an Ubuntu 14.04/16.04 system (both do the same thing). I can see the device inside the guest VM: $ lspci -vnn | grep VGA 00:05.0 VGA compatible controller: NVIDIA Corporation Device 1b80 (rev a1) I was able to successfully install the driver (370.2, latest driver). It installs, but is not recognized by nvidia-smi: $ nvidia-smi Unable to determine the device handle for GPU 0000:00:05.0: Unknown Error Looking in dmesg I see the following error message [29.535583] nvidia 0000:00:05.0: irq 45 for MSI/MSI-X [29.577727] NVRM: RmInitAdapter failed! (0x23:0x56:458) [29.577807] NVRM: rm_init_adapter failed for device bearing minor number 0 I can switch out the GTX 1080 for a different card (M4000, do passthrough and install drivers on the guest) and it works. I am going to try tomorrow with another Geforce card. Another person on the NVidia forums had the SAME exact issue as me (but no answer). Is there any way to debug this further?
I had the same problem; I found the answer at https://www.evonide.com/non-root-gpu-passthrough-setup/. You need to add -cpu host,kvm=off to the qemu command line. I'm using ganeti, so the following fixed the problem: gnt-instance modify -H cpu_type="host\,kvm=off" To be clear, this flag does not switch off KVM acceleration for the guest (that's switched on with -machine pc,accel=kvm); it hides the KVM hypervisor signature (the KVM CPUID leaves) from the guest, which is what the NVIDIA driver checks for before refusing to initialize a consumer card inside a VM.
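Put together, the relevant part of a raw qemu invocation looks roughly like this (a sketch: the memory size and the vfio-pci host address are placeholders, and disk/display options are omitted):

```
qemu-system-x86_64 \
  -machine pc,accel=kvm \
  -cpu host,kvm=off \
  -m 8G \
  -device vfio-pci,host=01:00.0
```

The host= address is the card's PCI address on the host (from lspci), not the 00:05.0 address it appears at inside the guest.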
Driver for GTX 1080 doesn't work on guest when using KVM PCI Passthrough
1,336,906,003,000
I was wondering if Unix systems use the GPU for the startup splash/loading screen because I've been having some trouble with an overheating Mac with graphics issues. Unix-type systems (such as MacOS 10.6, 10.10 and different versions of Ubuntu) show the splash screen, but never actually boot into the GUI (typically just a plain black/blue/white screen after the startup splash). Windows, however, starts up (I assume this is what's happening as I can hear hard drive activity) and only shows a black screen (no splash or loading screen). This just made me curious as I have a cursed 2008 ATI iMac. I plan later to try reapplying thermal paste to see if that does any good, and then try a reflow (I know this'll only be a very temporary solution but I just want to see if anything will work), but if all else fails, it'll probably go into the bin.
The answer to your literal question is yes: all systems use the GPU to display startup messages and splash screens. That's because going through the GPU is the only way to display something on the monitor. However, the answer to the question you meant to ask is no: the ways the GPU is used during startup and after the system has fully booted are different. During startup, the operating system uses the GPU in text mode or as a simple framebuffer. These involve very little work from the GPU, so they are unlikely to trigger GPU bugs or to make it overheat. Text mode is limited in that it can only show text in a single monospace font. Framebuffer mode can show arbitrary images, but it's slow. Both modes may use a resolution that's less than the maximum that the GPU and monitor can do. Once the system has fully booted, it likely starts using the GPU in a different way, using its computational capabilities. This involves a complex driver in the operating system, and may involve some nontrivial computation on the GPU. Under Linux, this mode is part of the X window system (or a replacement for it such as Wayland). You may be able to get a GUI on Linux with the X.org fbdev driver (which uses the GPU as a simple framebuffer) or with the X.org VESA driver (which is a very old standard that does little more than a framebuffer and has limited resolution). It won't be fast, and it might not be pretty, but it's better than nothing. You may need to work in text mode first to prevent X from starting in a mode that doesn't work. The way to do this depends on the distribution. The Arch Wiki may be useful even if you don't use Arch. Once you've logged in as root, create or edit /etc/X11/xorg.conf to choose the video driver.
For example, for fbdev, you need something like this (untested): Section "Device" Identifier "fbdev" Driver "fbdev" Option "fbdev" "/dev/fb0" EndSection You also need to install the appropriate driver (if it isn't already present), which again is distribution-dependent. For example, on Debian/Ubuntu, that's apt-get install xserver-xorg-video-fbdev
Do Unix systems and other similar systems use the GPU for the startup splash/loading screen (when there is one)?
1,336,906,003,000
My nvidia-smi output is as follows COVID19_002_6LU7_Protease_Top_3/ni_fda130/fda130_fix$ nvidia-smi Sun Jun 7 15:00:30 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Quadro K620 On | 00000000:02:00.0 On | N/A | | 63% 73C P0 19W / 30W | 1253MiB / 1994MiB | 98% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1406 G /usr/lib/xorg/Xorg 12MiB | | 0 2006 G /usr/lib/xorg/Xorg 193MiB | | 0 2186 G /usr/bin/gnome-shell 370MiB | | 0 3007 G ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files 400MiB | | 0 9680 G /opt/teamviewer/tv_bin/TeamViewer 10MiB | | 0 14270 G /usr/lib/rstudio/bin/rstudio 56MiB | | 0 14961 G /usr/lib/rstudio/bin/rstudio 61MiB | | 0 22725 G ...passed-by-fd --v8-snapshot-passed-by-fd 4MiB | | 0 23617 C gmx 74MiB | +-----------------------------------------------------------------------------+ gmx is a molecular dynamics simulation and is my primary process. I don't recognize some of the processes, especially ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files . What is it, and how can I prevent it from running on the GPU? Can I also shift /usr/bin/gnome-shell to CPU usage rather than GPU usage? I came across one such question. But it is unanswered. I also found one more thread on this topic. But it is essentially not fully answered.
Your GPU is being used for both display and compute processes; you can see which is which by looking at the “Type” column — “G” means that the process is a graphics process (using the GPU for its display), “C” means that the process is a compute process (using the GPU for computation). To move a type “G” process of the GPU, you need to stop it from displaying on the GPU, which will involve stopping the process and (if appropriate) starting it on another GPU for display purposes. As far as the ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files process is concerned, you’ll have to look for it using ps to determine what it is.
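For the ps lookup, a sketch (3007 is the PID of the truncated entry from the table above; nvidia-smi shortens long command lines, but ps shows them in full, so you can identify the process for sure):

```shell
# Print the full command line behind a PID from the nvidia-smi table;
# pid=/args= suppress the header columns.
ps -o pid=,args= -p 3007 || echo "PID 3007 is no longer running"
```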
How to shift process from GPU to CPU usage
1,336,906,003,000
I have an Ubuntu version here that is started from USB as a Live version. I do not want to install it on the hard disk, because that would be too much for only testing a small thing on Ubuntu. So I started Ubuntu and installed the nvidia driver (from nvidia) for a GPU (Tesla C2050) with the following commands: sudo apt-add-repository ppa:xorg-edgers/ppa -y sudo apt-get update sudo apt-get install nvidia-346 As Ubuntu is started as a Live version, in the beginning the nouveau driver was active. I want to deactivate it (maybe through rmmod or something similar), so that only the nvidia driver is active and the GPU uses the nvidia driver. How is this possible? What can I do without rebooting the whole system (because all packages installed / removed / changed would be gone)? I have access to Ubuntu through SSH. I read that it might be helpful to type the command sudo update-initramfs -u but that command generated the output update-initramfs is disabled since running on read-only media
You need to unload the nouveau driver before you can load the nvidia driver. However, the nouveau driver is currently in use by the X server, so it cannot be unloaded yet. You have to stop the X server first (but don't just restart it, as then it will use the nouveau driver again). So in short: stop the X server: sudo service lightdm stop unload the nouveau driver: sudo rmmod nouveau load the nvidia driver: sudo modprobe nvidia start the X server: sudo service lightdm start You might be out of luck and the framebuffer for the console is locking the nouveau driver as well. In this case I haven't found a way to unload the driver at all...
Remove nouveau driver (nvidia) without rebooting
1,336,906,003,000
I was wondering how Linux would handle a gamer computer, so I built one, and as we know GeForce does not like Linux as much as AMD does, which is why I chose the latter. I built a computer with an AMD Ryzen 7 1800X CPU and a Radeon RX 560D GPU, as the Vega is too expensive for me to purchase, and benchmarks said the 560 currently has the best cost-benefit ratio. After some research I discovered the suffix D means it has a slightly lower clock speed to save some power compared with the RX560 without the D. After countless crashes during random gaming I finally found out the problem is the GPU overheating: its fan speed tends to follow the CPU fan speed, but of course in some games the GPU is under much more load than the CPU. I partially solved the problem by customizing the fan speed based on the GPU temperature instead of the CPU's; it now ramps up gradually and reaches maximum speed at 50 degrees Celsius, but the problem is that in some games it stays at maximum speed all the time and eventually still crashes. Describing the crash: the screen blinks and then goes black, the GPU fan stops, the keyboard LED blinks and then turns off, the mouse does the same, the other CPU fan keeps running; sometimes the system stays frozen forever, sometimes it auto-reboots. As a reboot is required I could not find any hint in the system logs; initially I thought it was a kernel panic, but even using kdump and duplicating the kernel the system still crashes in a way I cannot recover from. I do not know if Windows would have the same problem, but I strongly believe it would not, as I have never seen someone with the same problem on Windows. So my question is: is there some way to tell the kernel to make the GPU take it easy when it is about to overheat, maybe by automatically reducing the GPU clock speed?
I found the solution. There are some files in /sys/class/drm/card0/device: the file pp_dpm_mclk indicates the GPU memory clock, and the file pp_dpm_sclk indicates the GPU core clock. Mine: $ egrep -H . /sys/class/drm/card0/device/pp_dpm_* /sys/class/drm/card0/device/pp_dpm_mclk:0: 300Mhz /sys/class/drm/card0/device/pp_dpm_mclk:1: 1500Mhz * /sys/class/drm/card0/device/pp_dpm_pcie:0: 2.5GB, x8 * /sys/class/drm/card0/device/pp_dpm_pcie:1: 8.0GB, x16 /sys/class/drm/card0/device/pp_dpm_sclk:0: 214Mhz * /sys/class/drm/card0/device/pp_dpm_sclk:1: 481Mhz /sys/class/drm/card0/device/pp_dpm_sclk:2: 760Mhz /sys/class/drm/card0/device/pp_dpm_sclk:3: 1000Mhz /sys/class/drm/card0/device/pp_dpm_sclk:4: 1050Mhz /sys/class/drm/card0/device/pp_dpm_sclk:5: 1100Mhz /sys/class/drm/card0/device/pp_dpm_sclk:6: 1150Mhz /sys/class/drm/card0/device/pp_dpm_sclk:7: 1196Mhz And the file power_dpm_force_performance_level indicates the profile, which can be low, auto or manual; the default is auto. When low, it always runs at the lowest clock, which is not exactly what I want, so I set it to manual and made a script that keeps changing the clock according to the GPU temperature, and voilà, it worked! To change the clock on the manual profile, just write to the pp_dpm_sclk file the number of the line you want, starting with 0, in my case up to 7. If you are interested in my script, here it is.
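The script itself is not reproduced in the answer; a sketch of the idea (the sysfs paths match the answer, while the hwmon temperature file and the 70°C threshold are my assumptions for illustration; needs root):

```shell
# Cap the GPU core clock when the card runs hot, restore it when it cools.
card=/sys/class/drm/card0/device
[ -w "$card/power_dpm_force_performance_level" ] || {
    echo "no writable amdgpu sysfs here (need root and an amdgpu card)"
    exit 0
}
echo manual > "$card/power_dpm_force_performance_level"
while sleep 5; do
    # temp1_input reports millidegrees Celsius
    t=$(cat "$card"/hwmon/hwmon*/temp1_input)
    if [ "$t" -ge 70000 ]; then
        echo 3 > "$card/pp_dpm_sclk"   # hot: cap at state 3 (1000Mhz above)
    else
        echo 7 > "$card/pp_dpm_sclk"   # cool: allow the top state
    fi
done
```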
How to prevent GPU from overheating and auto turning off
1,336,906,003,000
I've spent a few hours trying to understand the differences between KVM and Xen without much success. Both are Type 1 hypervisors with comparable performance (source), and I don't understand what differentiates them. The only specific needs I have are that the guest OS isn't able to interact with the host's files (which seems to be the default behaviour) and that both the host and the guest can use their own GPU for video rendering. That seems not to be a problem, as both Xen and KVM support some kind of "GPU/VGA/PCI passthrough" as long as there are two physical graphics cards. So what are the differences between Xen and KVM? Is one or the other more suitable for graphical performance? Thanks in advance for the help :)
KVM is normally supported already via libvirt and the kernels in modern distros without much hassle. You just need a CPU with hardware virtualization extensions (Intel VT-x, or AMD-V for AMD processors) — and, for PCI passthrough, an IOMMU (Intel VT-d or AMD-Vi) — and your BIOS has to have them enabled. After that, it's all about installing the necessary packages to get it going. Xen, yes, supports it too. Xen is literally its own platform. A long time back, Xen was known to have the most documentation on IOMMU and VGA passthrough. Now you'll find multitudes of users using KVM in a similar manner with high rates of success (I am one of them, using F23 with a GTX 970 passed to a Windows VM for gaming). To answer your question, though: the primary difference is that KVM has had a lot of work put into it by Red Hat and the open source community, since Red Hat dropped Xen in 2009 in favor of KVM. This should in no way sway your options. You'll find varying benchmarks showing KVM outperforming Xen on numerous occasions, with KVM being just 5% slower than bare metal. Now, Xen does come with a bigger feature set than KVM out of the box, but a lot of it is for migrations and other things that you would consider "enterprise". I personally believe you should try both and see what works better for you. There are many guides to choose from and to work on, based on your distribution of choice.
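As a quick sanity check before choosing either hypervisor, you can look for the virtualization flag in the CPU's feature list; a small sketch (vmx/svm are the standard flag names in /proc/cpuinfo):

```shell
# virt_type: report which hardware-virt flag a cpuinfo "flags" line carries
# (vmx = Intel VT-x, svm = AMD-V); a quick sketch, not a full capability check
virt_type() {
    case " $1 " in
        *" vmx "*) echo "Intel VT-x" ;;
        *" svm "*) echo "AMD-V"      ;;
        *)         echo "none"       ;;
    esac
}

# on a live system you would feed it the real flags line, e.g.:
#   virt_type "$(awk -F: '/^flags/{print $2; exit}' /proc/cpuinfo)"
virt_type "fpu vme de pse svm sse2"    # prints AMD-V
```

Note that the flag can be present while still disabled in the BIOS, so enable it there as well.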
Should I use KVM or Xen ? What are the main differences? [closed]
1,336,906,003,000
I am curious about how data is copied back and forth between CPU and GPU memory when I run a CUDA program. Specifically, I want to know how the Linux kernel is involved in this process. I had a few possible assumptions:

1. The CUDA user library sees the GPU as a file and calls read(2) and write(2) for every transaction.
2. The CUDA user library asks Linux to mmap(2) the relevant control registers (DMA registers, PIO registers and some other MMIO registers) into user space, and then operates the GPU however it wants in user state.
3. Something else?

I eliminated assumption #1 by running simple CUDA programs that copy data back and forth 1000 times (launching some empty kernels in between), where strace(1) did not observe any calls to write(2) or read(2). Assumption #2 seems possible, since I observed with time(1) that the amount of data transferred seems to scale with user time instead of sys time. So the program seems to be copying data in user state. But it seems a bit weird: how could a user program be allowed to manipulate such important I/O control registers by itself? I would be grateful for some professional ideas on this topic.
The application asks the kernel to mmap a set of buffers at startup, creating this mapping is a privileged operation. Normal operation just fills these buffers with data (such as textures, vertices or commands), and finally makes a single kernel call to start the submitted command queue. This start strobe is the only register access performed, everything else is shared memory. The GPU has its own rudimentary MMU to make sure commands cannot reference data that belongs to another context, except where desired (e.g. a compositor that combines the render target from a game with the render target from an overlay and writes the result to the on-screen buffer). For compute-only workloads, the same mechanism works fine, the command queue just doesn't end with "send data to screen" but with "return data to host".
From the Linux kernel's point of view, how does a user program communicate with a CUDA GPU?
1,336,906,003,000
I have a rendering software, to be more specific, it's a Unity3D «game» that renders video (saving rendered frames). Unfortunately Unity3D doesn't support «headless» rendering (it can run in headless mode, but in this case it doesn't render frames), so it needs an X server to create a window. I have a Debian Bullseye server with ~~Intel GPU (630)~~ NVidia GT1030 with proprietary driver I don't have any kind of display I can't plug anything like HDMI fake display device. It's performance-critical, so it must be rendered fully hardware-accelerated, so solutions like xvfb are not suitable. And I also want to run it in Docker, and sometimes I need to see what's rendering right now with VNC for debugging purposes. As I understand it, I need to: Run an X server on the host machine, creating a virtual display Share host's X server with a docker container, run my app and a VNC server there Is it the best way to do that? I've created a virtual display: Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 20.0 - 120.0 VertRefresh 30.0 - 120.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "NVIDIA GeForce GT 1030" Option "ConnectedMonitor" "DFP" Option "CustomEDID" "DFP-0:/etc/X11/EDID.bin" Option "ConstrainCursor" "off" BusID "PCI:01:0:0" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "DFP-0: 1280x1024 +0+0" SubSection 
"Display" Depth 24 EndSubSection EndSection And started X: sudo X :0 -config /etc/X11/xorg.conf It starts without any errors, but seems hung (doesn't react to Ctrl+C, and the only way to kill it is kill -9 PID). glxinfo doesn't work: $ DISPLAY=:0 glxinfo name of display: :0 X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 151 (GLX) Minor opcode of failed request: 24 (X_GLXCreateNewContext) Value in failed request: 0x0 Serial number of failed request: 110 Current serial number in output stream: 111 However, if I specify the display, xrandr shows its info: $ xrandr -d :0 Screen 0: minimum 8 x 8, current 1280 x 1024, maximum 32767 x 32767 DVI-D-0 connected primary 1280x1024+0+0 (normal left inverted right x axis y axis) 510mm x 290mm 1920x1080 60.00 + 59.94 50.00 60.00 50.04 1680x1050 59.95 1440x900 59.89 1280x1024 75.02* 60.02 1280x960 60.00 1280x720 60.00 59.94 50.00 1024x768 75.03 70.07 60.00 800x600 75.00 72.19 60.32 56.25 720x576 50.00 720x480 59.94 640x480 75.00 72.81 59.94 59.93 HDMI-0 disconnected (normal left inverted right x axis y axis) X server log seems fine: [ 306.770] X.Org X Server 1.20.11 X Protocol Version 11, Revision 0 [ 306.770] Build Operating System: linux Debian [ 306.770] Current Operating System: Linux home-server 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 [ 306.770] Kernel command line: BOOT_IMAGE=/vmlinuz-5.10.0-9-amd64 root=/dev/mapper/home--server--vg-root ro quiet [ 306.770] Build Date: 13 April 2021 04:07:31PM [ 306.770] xorg-server 2:1.20.11-1 (https://www.debian.org/support) [ 306.770] Current version of pixman: 0.40.0 [ 306.770] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 306.770] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. 
[ 306.770] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Nov 18 21:49:50 2021 [ 306.770] (++) Using config file: "/etc/X11/xorg.conf" [ 306.770] (==) ServerLayout "Layout0" [ 306.770] (**) |-->Screen "Screen0" (0) [ 306.770] (**) | |-->Monitor "Monitor0" [ 306.770] (**) | |-->Device "Device0" [ 306.770] (**) |-->Input Device "Keyboard0" [ 306.770] (**) |-->Input Device "Mouse0" [ 306.770] (==) Automatically adding devices [ 306.770] (==) Automatically enabling devices [ 306.770] (==) Automatically adding GPU devices [ 306.770] (==) Max clients allowed: 256, resource mask: 0x1fffff [ 306.770] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 306.770] Entry deleted from font path. [ 306.770] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 306.770] Entry deleted from font path. [ 306.770] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 306.770] Entry deleted from font path. [ 306.770] (WW) The directory "/usr/share/fonts/X11/Type1" does not exist. [ 306.770] Entry deleted from font path. [ 306.770] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 306.770] Entry deleted from font path. [ 306.770] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 306.770] Entry deleted from font path. [ 306.770] (==) FontPath set to: /usr/share/fonts/X11/misc, built-ins [ 306.770] (==) ModulePath set to "/usr/lib/xorg/modules" [ 306.770] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. 
[ 306.770] (WW) Disabling Keyboard0 [ 306.770] (WW) Disabling Mouse0 [ 306.770] (II) Loader magic: 0x562334c16e40 [ 306.770] (II) Module ABI versions: [ 306.770] X.Org ANSI C Emulation: 0.4 [ 306.770] X.Org Video Driver: 24.1 [ 306.770] X.Org XInput driver : 24.1 [ 306.770] X.Org Server Extension : 10.0 [ 306.771] (--) using VT number 3 [ 306.771] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration [ 306.771] (II) xfree86: Adding drm device (/dev/dri/card0) [ 306.772] (--) PCI:*(1@0:0:0) 10de:1d01:1043:85f4 rev 161, Mem @ 0xa2000000/16777216, 0x90000000/268435456, 0xa0000000/33554432, I/O @ 0x00003000/128, BIOS @ 0x????????/131072 [ 306.772] (II) LoadModule: "glx" [ 306.772] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 306.772] (II) Module glx: vendor="X.Org Foundation" [ 306.772] compiled for 1.20.11, module version = 1.0.0 [ 306.772] ABI class: X.Org Server Extension, version 10.0 [ 306.772] (II) LoadModule: "nvidia" [ 306.772] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so [ 306.773] (II) Module nvidia: vendor="NVIDIA Corporation" [ 306.773] compiled for 1.6.99.901, module version = 1.0.0 [ 306.773] Module class: X.Org Video Driver [ 306.773] (II) NVIDIA dlloader X Driver 470.86 Tue Oct 26 21:53:29 UTC 2021 [ 306.773] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs [ 306.773] (II) Loading sub module "fb" [ 306.773] (II) LoadModule: "fb" [ 306.773] (II) Loading /usr/lib/xorg/modules/libfb.so [ 306.773] (II) Module fb: vendor="X.Org Foundation" [ 306.773] compiled for 1.20.11, module version = 1.0.0 [ 306.773] ABI class: X.Org ANSI C Emulation, version 0.4 [ 306.773] (II) Loading sub module "wfb" [ 306.773] (II) LoadModule: "wfb" [ 306.773] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 306.773] (II) Module wfb: vendor="X.Org Foundation" [ 306.773] compiled for 1.20.11, module version = 1.0.0 [ 306.773] ABI class: X.Org ANSI C Emulation, version 0.4 [ 306.773] (II) 
Loading sub module "ramdac" [ 306.773] (II) LoadModule: "ramdac" [ 306.773] (II) Module "ramdac" already built-in [ 306.773] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 [ 306.773] (==) NVIDIA(0): RGB weight 888 [ 306.773] (==) NVIDIA(0): Default visual is TrueColor [ 306.773] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 306.773] (**) NVIDIA(0): Option "ConstrainCursor" "off" [ 306.773] (**) NVIDIA(0): Option "ConnectedMonitor" "DFP" [ 306.773] (**) NVIDIA(0): Option "CustomEDID" "DFP-0:/etc/X11/EDID.bin" [ 306.773] (**) NVIDIA(0): Option "MetaModes" "DFP-0: 1280x1024 +0+0" [ 306.773] (**) NVIDIA(0): Enabling 2D acceleration [ 306.773] (**) NVIDIA(0): ConnectedMonitor string: "DFP" [ 306.773] (II) Loading sub module "glxserver_nvidia" [ 306.773] (II) LoadModule: "glxserver_nvidia" [ 306.773] (II) Loading /usr/lib/xorg/modules/extensions/libglxserver_nvidia.so [ 306.777] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation" [ 306.777] compiled for 1.6.99.901, module version = 1.0.0 [ 306.777] Module class: X.Org Server Extension [ 306.777] (II) NVIDIA GLX Module 470.86 Tue Oct 26 21:51:04 UTC 2021 [ 306.777] (II) NVIDIA: The X server supports PRIME Render Offload. [ 306.953] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:1:0:0 [ 306.953] (--) NVIDIA(0): DFP-0 [ 306.953] (--) NVIDIA(0): DFP-1 [ 306.953] (**) NVIDIA(0): Using ConnectedMonitor string "DFP-0". 
[ 306.953] (II) NVIDIA(0): NVIDIA GPU NVIDIA GeForce GT 1030 (GP108-A) at PCI:1:0:0 [ 306.953] (II) NVIDIA(0): (GPU-0) [ 306.953] (--) NVIDIA(0): Memory: 2097152 kBytes [ 306.953] (--) NVIDIA(0): VideoBIOS: 86.08.0c.00.1a [ 306.953] (II) NVIDIA(0): Detected PCI Express Link width: 4X [ 306.954] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): connected [ 306.954] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): Internal TMDS [ 306.954] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): 600.0 MHz maximum pixel clock [ 306.954] (--) NVIDIA(GPU-0): [ 306.954] (--) NVIDIA(GPU-0): DFP-1: disconnected [ 306.954] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS [ 306.954] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock [ 306.954] (--) NVIDIA(GPU-0): [ 306.958] (II) NVIDIA(0): Validated MetaModes: [ 306.958] (II) NVIDIA(0): "DFP-0:1280x1024+0+0" [ 306.958] (II) NVIDIA(0): Virtual screen size determined to be 1280 x 1024 [ 306.961] (--) NVIDIA(0): DPI set to (63, 89); computed from "UseEdidDpi" X config [ 306.961] (--) NVIDIA(0): option [ 306.961] (II) NVIDIA: Reserving 24576.00 MB of virtual memory for indirect memory [ 306.961] (II) NVIDIA: access. [ 306.963] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon [ 306.963] (II) NVIDIA(0): may not be running or the "AcpidSocketPath" X [ 306.963] (II) NVIDIA(0): configuration option may not be set correctly. When the [ 306.963] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will [ 306.963] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For [ 306.963] (II) NVIDIA(0): details, please see the "ConnectToAcpid" and [ 306.963] (II) NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X [ 306.963] (II) NVIDIA(0): Config Options in the README. 
[ 306.975] (II) NVIDIA(0): Setting mode "DFP-0:1280x1024+0+0" [ 306.998] (==) NVIDIA(0): Disabling shared memory pixmaps [ 306.998] (==) NVIDIA(0): Backing store enabled [ 306.998] (==) NVIDIA(0): Silken mouse enabled [ 306.998] (==) NVIDIA(0): DPMS enabled [ 306.998] (WW) NVIDIA(0): Option "TwinView" is not used [ 306.998] (II) Loading sub module "dri2" [ 306.998] (II) LoadModule: "dri2" [ 306.998] (II) Module "dri2" already built-in [ 306.998] (II) NVIDIA(0): [DRI2] Setup complete [ 306.998] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia [ 306.998] (II) Initializing extension Generic Event Extension [ 306.998] (II) Initializing extension SHAPE [ 306.998] (II) Initializing extension MIT-SHM [ 306.998] (II) Initializing extension XInputExtension [ 306.999] (II) Initializing extension XTEST [ 306.999] (II) Initializing extension BIG-REQUESTS [ 306.999] (II) Initializing extension SYNC [ 306.999] (II) Initializing extension XKEYBOARD [ 306.999] (II) Initializing extension XC-MISC [ 306.999] (II) Initializing extension SECURITY [ 306.999] (II) Initializing extension XFIXES [ 306.999] (II) Initializing extension RENDER [ 306.999] (II) Initializing extension RANDR [ 306.999] (II) Initializing extension COMPOSITE [ 306.999] (II) Initializing extension DAMAGE [ 306.999] (II) Initializing extension MIT-SCREEN-SAVER [ 306.999] (II) Initializing extension DOUBLE-BUFFER [ 306.999] (II) Initializing extension RECORD [ 306.999] (II) Initializing extension DPMS [ 306.999] (II) Initializing extension Present [ 307.000] (II) Initializing extension DRI3 [ 307.000] (II) Initializing extension X-Resource [ 307.000] (II) Initializing extension XVideo [ 307.000] (II) Initializing extension XVideo-MotionCompensation [ 307.000] (II) Initializing extension SELinux [ 307.000] (II) SELinux: Disabled on system [ 307.000] (II) Initializing extension GLX [ 307.000] (II) Initializing extension GLX [ 307.000] (II) Indirect GLX disabled. 
[ 307.000] (II) GLX: Another vendor is already registered for screen 0 [ 307.000] (II) Initializing extension XFree86-VidModeExtension [ 307.000] (II) Initializing extension XFree86-DGA [ 307.000] (II) Initializing extension XFree86-DRI [ 307.000] (II) Initializing extension DRI2 [ 307.000] (II) Initializing extension NV-GLX [ 307.000] (II) Initializing extension NV-CONTROL [ 307.000] (II) Initializing extension XINERAMA [ 307.019] (II) config/udev: Adding input device Power Button (/dev/input/event3) [ 307.019] (II) No input driver specified, ignoring this device. [ 307.019] (II) This device may have been added with another device file. [ 307.019] (II) config/udev: Adding input device Power Button (/dev/input/event2) [ 307.019] (II) No input driver specified, ignoring this device. [ 307.019] (II) This device may have been added with another device file. [ 307.019] (II) config/udev: Adding input device Sleep Button (/dev/input/event1) [ 307.019] (II) No input driver specified, ignoring this device. [ 307.019] (II) This device may have been added with another device file. [ 307.019] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event5) [ 307.019] (II) No input driver specified, ignoring this device. [ 307.019] (II) This device may have been added with another device file. [ 307.019] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event6) [ 307.019] (II) No input driver specified, ignoring this device. [ 307.019] (II) This device may have been added with another device file. [ 307.020] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=8 (/dev/input/event7) [ 307.020] (II) No input driver specified, ignoring this device. [ 307.020] (II) This device may have been added with another device file. [ 307.020] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=9 (/dev/input/event8) [ 307.020] (II) No input driver specified, ignoring this device. 
[ 307.020] (II) This device may have been added with another device file. [ 307.020] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=10 (/dev/input/event9) [ 307.020] (II) No input driver specified, ignoring this device. [ 307.020] (II) This device may have been added with another device file. [ 307.020] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/event0) [ 307.020] (II) No input driver specified, ignoring this device. [ 307.020] (II) This device may have been added with another device file. [ 307.020] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/js0) [ 307.020] (II) No input driver specified, ignoring this device. [ 307.020] (II) This device may have been added with another device file. [ 307.021] (II) config/udev: Adding input device HDA Intel PCH Front Mic (/dev/input/event10) [ 307.021] (II) No input driver specified, ignoring this device. [ 307.021] (II) This device may have been added with another device file. [ 307.021] (II) config/udev: Adding input device HDA Intel PCH Rear Mic (/dev/input/event11) [ 307.021] (II) No input driver specified, ignoring this device. [ 307.021] (II) This device may have been added with another device file. [ 307.021] (II) config/udev: Adding input device HDA Intel PCH Line (/dev/input/event12) [ 307.021] (II) No input driver specified, ignoring this device. [ 307.021] (II) This device may have been added with another device file. [ 307.021] (II) config/udev: Adding input device HDA Intel PCH Line Out (/dev/input/event13) [ 307.021] (II) No input driver specified, ignoring this device. [ 307.021] (II) This device may have been added with another device file. [ 307.021] (II) config/udev: Adding input device HDA Intel PCH Front Headphone (/dev/input/event14) [ 307.021] (II) No input driver specified, ignoring this device. [ 307.021] (II) This device may have been added with another device file. 
[ 307.021] (II) config/udev: Adding input device PC Speaker (/dev/input/event4) [ 307.021] (II) No input driver specified, ignoring this device. [ 307.021] (II) This device may have been added with another device file. [ 390.739] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon [ 390.739] (II) NVIDIA(0): may not be running or the "AcpidSocketPath" X [ 390.739] (II) NVIDIA(0): configuration option may not be set correctly. When the [ 390.739] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will [ 390.739] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For [ 390.739] (II) NVIDIA(0): details, please see the "ConnectToAcpid" and [ 390.739] (II) NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X [ 390.739] (II) NVIDIA(0): Config Options in the README. [ 390.739] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): connected [ 390.739] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): Internal TMDS [ 390.739] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): 600.0 MHz maximum pixel clock [ 390.739] (--) NVIDIA(GPU-0): [ 390.739] (--) NVIDIA(GPU-0): DFP-1: disconnected [ 390.739] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS [ 390.739] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock [ 390.739] (--) NVIDIA(GPU-0): [ 390.760] (II) NVIDIA(0): Setting mode "DFP-0:1280x1024+0+0" [ 390.781] (==) NVIDIA(0): Disabling shared memory pixmaps [ 390.781] (==) NVIDIA(0): DPMS enabled [ 390.781] (II) Loading sub module "dri2" [ 390.781] (II) LoadModule: "dri2" [ 390.781] (II) Module "dri2" already built-in [ 390.781] (II) NVIDIA(0): [DRI2] Setup complete [ 390.781] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia [ 390.781] (II) Initializing extension Generic Event Extension [ 390.781] (II) Initializing extension SHAPE [ 390.781] (II) Initializing extension MIT-SHM [ 390.781] (II) Initializing extension XInputExtension [ 390.781] (II) Initializing extension XTEST [ 390.781] (II) Initializing extension BIG-REQUESTS [ 390.782] (II) 
Initializing extension SYNC [ 390.782] (II) Initializing extension XKEYBOARD [ 390.782] (II) Initializing extension XC-MISC [ 390.782] (II) Initializing extension SECURITY [ 390.782] (II) Initializing extension XFIXES [ 390.782] (II) Initializing extension RENDER [ 390.782] (II) Initializing extension RANDR [ 390.782] (II) Initializing extension COMPOSITE [ 390.782] (II) Initializing extension DAMAGE [ 390.782] (II) Initializing extension MIT-SCREEN-SAVER [ 390.782] (II) Initializing extension DOUBLE-BUFFER [ 390.782] (II) Initializing extension RECORD [ 390.782] (II) Initializing extension DPMS [ 390.782] (II) Initializing extension Present [ 390.783] (II) Initializing extension DRI3 [ 390.783] (II) Initializing extension X-Resource [ 390.783] (II) Initializing extension XVideo [ 390.783] (II) Initializing extension XVideo-MotionCompensation [ 390.783] (II) Initializing extension SELinux [ 390.783] (II) SELinux: Disabled on system [ 390.783] (II) Initializing extension GLX [ 390.783] (II) Initializing extension GLX [ 390.783] (II) Indirect GLX disabled. [ 390.783] (II) GLX: Another vendor is already registered for screen 0 [ 390.783] (II) Initializing extension XFree86-VidModeExtension [ 390.783] (II) Initializing extension XFree86-DGA [ 390.783] (II) Initializing extension XFree86-DRI [ 390.783] (II) Initializing extension DRI2 [ 390.783] (II) Initializing extension NV-GLX [ 390.783] (II) Initializing extension NV-CONTROL [ 390.783] (II) Initializing extension XINERAMA [ 390.801] (II) config/udev: Adding input device Power Button (/dev/input/event3) [ 390.801] (II) No input driver specified, ignoring this device. [ 390.801] (II) This device may have been added with another device file. [ 390.801] (II) config/udev: Adding input device Power Button (/dev/input/event2) [ 390.801] (II) No input driver specified, ignoring this device. [ 390.801] (II) This device may have been added with another device file. 
[ 390.802] (II) config/udev: Adding input device Sleep Button (/dev/input/event1) [ 390.802] (II) No input driver specified, ignoring this device. [ 390.802] (II) This device may have been added with another device file. [ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event5) [ 390.802] (II) No input driver specified, ignoring this device. [ 390.802] (II) This device may have been added with another device file. [ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event6) [ 390.802] (II) No input driver specified, ignoring this device. [ 390.802] (II) This device may have been added with another device file. [ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=8 (/dev/input/event7) [ 390.802] (II) No input driver specified, ignoring this device. [ 390.802] (II) This device may have been added with another device file. [ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=9 (/dev/input/event8) [ 390.802] (II) No input driver specified, ignoring this device. [ 390.802] (II) This device may have been added with another device file. [ 390.803] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=10 (/dev/input/event9) [ 390.803] (II) No input driver specified, ignoring this device. [ 390.803] (II) This device may have been added with another device file. [ 390.803] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/event0) [ 390.803] (II) No input driver specified, ignoring this device. [ 390.803] (II) This device may have been added with another device file. [ 390.803] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/js0) [ 390.803] (II) No input driver specified, ignoring this device. [ 390.803] (II) This device may have been added with another device file. 
[ 390.803] (II) config/udev: Adding input device HDA Intel PCH Front Mic (/dev/input/event10) [ 390.803] (II) No input driver specified, ignoring this device. [ 390.803] (II) This device may have been added with another device file. [ 390.803] (II) config/udev: Adding input device HDA Intel PCH Rear Mic (/dev/input/event11) [ 390.803] (II) No input driver specified, ignoring this device. [ 390.803] (II) This device may have been added with another device file. [ 390.804] (II) config/udev: Adding input device HDA Intel PCH Line (/dev/input/event12) [ 390.804] (II) No input driver specified, ignoring this device. [ 390.804] (II) This device may have been added with another device file. [ 390.804] (II) config/udev: Adding input device HDA Intel PCH Line Out (/dev/input/event13) [ 390.804] (II) No input driver specified, ignoring this device. [ 390.804] (II) This device may have been added with another device file. [ 390.804] (II) config/udev: Adding input device HDA Intel PCH Front Headphone (/dev/input/event14) [ 390.804] (II) No input driver specified, ignoring this device. [ 390.804] (II) This device may have been added with another device file. [ 390.804] (II) config/udev: Adding input device PC Speaker (/dev/input/event4) [ 390.804] (II) No input driver specified, ignoring this device. [ 390.804] (II) This device may have been added with another device file. Where is the problem?
My xorg.conf is like this Section "ServerLayout" Identifier "Default Layout" Screen 0 "Screen0" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "InputDevice" Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" Option "XkbModel" "pc105" Option "XkbLayout" "us" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 20.0 - 120.0 VertRefresh 30.0 - 120.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 380" Option "ConnectedMonitor" "DFP" Option "UseDisplayDevice" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/HPZ24nq.bin" BusID "PCI:21:0:0" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "DFP-0: 1280x1024 +0+0" SubSection "Display" Depth 24 EndSubSection EndSection The parameters that need changes for your computer are at least BusID, DFP, DFP-0, /etc/X11/HPZ24nq.bin. I used edid file HPZ24nq.bin, which I got from some monitor. You will be able to set resolutions that are supported in an EDID file. You can get EDID file from monitor with read-edid. BusID you can get with lspci. I am not sure if you need that.
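One detail worth spelling out: lspci prints the bus address in hexadecimal (e.g. "01:00.0"), while the BusID in xorg.conf expects decimal "PCI:bus:device:function" values, so the two only look identical for small numbers. A small converter (a sketch; assumes a POSIX printf that accepts 0x-prefixed integer arguments, as bash and dash do):

```shell
# to_busid: turn an lspci-style hex address into an xorg.conf BusID string
to_busid() {
    # split "bus:device.function" on ':' and '.'
    IFS=':.' read -r bus dev fn <<EOF
$1
EOF
    # reinterpret each hex component as decimal for xorg.conf
    printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}

to_busid 01:00.0    # prints PCI:1:0:0
to_busid 0a:00.0    # prints PCI:10:0:0
```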
Best way to do rendering on Linux server with GPU but without a display?
1,492,630,278,000
What does this mean, what is hyst? "hyst = -273.1°C" $ sensors k10temp-pci-00c3 Adapter: PCI adapter Tctl: +45.0°C Tdie: +45.0°C Tccd1: +45.2°C nvme-pci-0100 Adapter: PCI adapter Composite: +48.9°C (low = -5.2°C, high = +83.8°C) (crit = +87.8°C) amdgpu-pci-0400 Adapter: PCI adapter vddgfx: 900.00 mV fan1: 1045 RPM (min = 0 RPM, max = 3200 RPM) edge: +51.0°C (crit = +94.0°C, hyst = -273.1°C) power1: 38.01 W (cap = 135.00 W)
“hyst” stands for hysteresis. In a sensor’s configuration, it’s the threshold below which the temperature must return for a sensor to no longer be considered critical. In your case, if the “edge” sensor reports a temperature of 94°C or above, it will be considered critical; once the sensor is critical, it will only become non-critical if it reports a temperature of -273.1°C or below (which can’t happen). See the “Thermal Hysteresis Mechanism” section in the sensors.conf manpage for details: Many monitoring chips do not handle the high and critical temperature limits as simple limits. Instead, they have two values for each limit, one which triggers an alarm when the temperature rises and another one which clears the alarm when the temperature falls. The latter is typically a few degrees below the former. This mechanism is known as hysteresis.
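The mechanism is easy to model; here is a toy sketch (temperatures in tenths of a degree so plain shell integer arithmetic suffices, purely illustrative) of how the alarm state evolves:

```shell
# update_alarm: one step of the hysteresis state machine from the manpage --
# the alarm trips at or above crit and clears only at or below hyst.
# args: current_alarm(0/1) temp crit hyst; prints the new alarm state
update_alarm() {
    alarm=$1 temp=$2 crit=$3 hyst=$4
    if   [ "$temp" -ge "$crit" ]; then echo 1
    elif [ "$temp" -le "$hyst" ]; then echo 0
    else echo "$alarm"    # between the two thresholds: keep previous state
    fi
}

# With hyst = -2731 (i.e. -273.1 C) a tripped alarm can never clear:
update_alarm 1 200 940 -2731    # prints 1 even at 20.0 C
```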
lm-sensors meaning of output
1,492,630,278,000
I recently purchased a new laptop and installed openSUSE Tumbleweed on it. The laptop has an Intel Core i5 processor with integrated graphics and an NVIDIA 3050 Ti. My goal is to configure Xorg to run on the integrated GPU and disable the NVIDIA GPU when it's not required (to save power, as it consumes around 6 watts). To achieve this, I used prime-select to set the offload mode by running the command sudo prime-select offload. However, I encountered a problem where Xorg is still running on the NVIDIA GPU. When I checked the output of nvidia-smi, I received the following information: Sun May 28 10:00:02 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.116.04 Driver Version: 525.116.04 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A | | N/A 42C P8 6W / 30W | 5MiB / 4096MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3246 G /usr/bin/Xorg.bin 4MiB | +-----------------------------------------------------------------------------+ Additionally, when I checked the task manager, it indicated that Xorg should not be running on the NVIDIA GPU. 
I have examined the xorg.conf file located at /etc/X11/xorg.conf and it contains the following configuration:

    Section "ServerLayout"
        Identifier "layout"
        Screen "intel"
        Option "AllowNVIDIAGPUScreens"
    EndSection

    Section "Device"
        Identifier "intel"
        Driver "modesetting"
        BusID "PCI:0:2:0"
    EndSection

    Section "Screen"
        Identifier "intel"
        Device "intel"
    EndSection

    Section "ServerFlags"
        Option "AutoAddGPU" "false"
    EndSection

    # needed for NVIDIA PRIME Render Offload
    Section "Device"
        Identifier "nvidia"
        Driver "nvidia"
        BusID "PCI:1:0:0"
    EndSection

I apologize if any necessary information is missing. Please let me know if you require any additional details. This is my first time working with a graphics card, so any guidance would be appreciated.
OHHHHHH, YESSSSS. I SOLVED IT! I simply added GPUDevice "intel" to the xorg.conf file! Now there are no processes running on the NVIDIA GPU! It doesn't go into a low-power state, but I think I will figure that out. I saw in the logs that it was using nvidia as the GPUDevice.

Edit: I eventually got PRIME offloading working thanks to this tutorial: https://wiki.archlinux.org/title/PRIME
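For readers landing here: based on the description above, the fix amounts to one extra GPUDevice line in the Screen section of the xorg.conf quoted in the question. A sketch of what that section would look like after the change (identifiers taken from the question's config; I haven't verified this exact file):

```
Section "Screen"
    Identifier "intel"
    Device     "intel"
    GPUDevice  "intel"    # the added line
EndSection
```

Per xorg.conf(5), a GPUDevice entry explicitly lists which Device sections may be attached to the screen as secondary ("GPU") devices, so naming only "intel" here keeps Xorg from grabbing the NVIDIA device.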
Dual GPU Setup: Xorg on Intel Integrated GPU, NVIDIA GPU for Gaming
1,492,630,278,000
I'm trying to get the optirun command to work with the FOSS Nouveau drivers on my computer, which has an embedded graphics unit and a discrete graphics processing unit. Here's my setup as reported by lspci | egrep -i 'vga|3d':

    00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07)
    01:00.0 3D controller: NVIDIA Corporation GK208BM [GeForce 920M] (rev a1)

According to the Nouveau CodeNames website page, my GPU is supported by the NV108 (GK208) Nouveau driver, so there's no reason why I can't make it work with the optirun command, right? However, after having followed the classic installation procedure:

- uninstall the proprietary drivers
- install the bumblebee and mesa-utils packages
- install VirtualGL

I can't get the optirun command to work. As an example, optirun glxgears gives the error:

    [ERROR]Cannot access secondary GPU - error: [XORG] (EE)
    [ERROR]Aborting because fallback start is disabled

The problem seems to be with the Nouveau module in the kernel:

    $ optirun -vv glxgears
    ----------------------
    [DEBUG]Reading file: /etc/bumblebee/bumblebee.conf
    [DEBUG]optirun version 3.2.1 starting...
    [DEBUG]Active configuration:
    [DEBUG] bumblebeed config file: /etc/bumblebee/bumblebee.conf
    [DEBUG] X display: :8
    [DEBUG] LD_LIBRARY_PATH:
    [DEBUG] Socket path: /var/run/bumblebee.socket
    [DEBUG] Accel/display bridge: auto
    [DEBUG] VGL Compression: proxy
    [DEBUG] VGLrun extra options:
    [DEBUG] Primus LD Path: /usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus:/usr/lib/primus:/usr/lib32/primus
    [DEBUG]Using auto-detected bridge virtualgl
    [INFO]Response: No - error: [XORG] (EE)
    [ERROR]Cannot access secondary GPU - error: [XORG] (EE)
    [DEBUG]Socket closed.
    [ERROR]Aborting because fallback start is disabled.
    [DEBUG]Killing all remaining processes.

What I tried: I tried to force Bumblebee to use the Nouveau drivers by setting Driver=nouveau in /etc/bumblebee/bumblebee.conf. It makes no difference.
What I fixed: Initially I had another error while executing the command:

    [ERROR]Cannot access secondary GPU - error: [XORG] (EE)
    [ERROR]Failed to load module "mouse" (module does not exist, 0)

I fixed it by installing the missing package xserver-xorg-input-mouse.
I finally found a solution to my problem by continuing my research.

Solution: do not use Optimus tools to switch between GPUs. The Primus and Optimus programs are made to be used with the Nvidia proprietary drivers, so it is not recommended to use them with the Nouveau drivers.

The Linux kernel has a tool that allows you to switch GPUs without installing additional programs: VGA Switcheroo. Note that this tool only works with open-source drivers. It may not be active by default on your system, in which case some manipulation is necessary. To check whether the tool is enabled, look for the switch file with:

    # cat /sys/kernel/debug/vgaswitcheroo/switch

In my case, the tool was not activated; I just had to uninstall Bumblebee to fix the problem. If the problem persists after uninstalling Bumblebee, follow the instructions in this article.

Now that vga_switcheroo is enabled, you can power off the currently unused GPU with:

    # echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

activate the discrete card with:

    # echo DIS > /sys/kernel/debug/vgaswitcheroo/switch

or activate the integrated card with:

    # echo IGD > /sys/kernel/debug/vgaswitcheroo/switch

References:
- HybridGraphics - Community Help Wiki by Ubuntu
- VGA Switcheroo - Linux Kernel Documentation
- VGA_Switcheroo by Chibi-nah
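As a small convenience, the commands above can be wrapped so that you never write to a switch file that isn't there. A sketch (my own wrapper, not part of vga_switcheroo itself; the optional second argument exists only so the path can be overridden):

```shell
# Usage (as root): vgaswitch OFF|ON|DIS|IGD
vgaswitch() {
    mode="$1"
    switch="${2:-/sys/kernel/debug/vgaswitcheroo/switch}"
    if [ ! -e "$switch" ]; then
        echo "vga_switcheroo is not active (open-source driver loaded? debugfs mounted?)" >&2
        return 1
    fi
    # The kernel interprets the keyword written to the switch file.
    printf '%s\n' "$mode" > "$switch"
}
```

For example, vgaswitch OFF powers down whichever GPU is currently unused.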
Nvidia Optimus with Nouveau drivers
1,492,630,278,000
Using the following command I have been able to see the processes that use the GPU, some of which show python under the COMMAND column:

    sudo fuser -v /dev/nvidia*

which prints:

                         USER        PID ACCESS COMMAND
    /dev/nvidia0:        root       1197 F...m  Xorg
                         alireza    1451 F...m  gnome-shell
                         alireza    5527 F...m  python
                         alireza    5567 F....  python
                         alireza    5568 F....  python

How can I kill all the processes that show python in the COMMAND column? So far I have to do it manually for each PID with sudo kill -9 <pid>, which is not easy if there are many of them. Is there a way to automate this and make it faster, i.e. write only one command that kills all the PIDs that have python in the COMMAND column?
EDIT

Here is a one-liner that should kill all python processes using /dev/nvidia*:

    sudo fuser -v /dev/nvidia* 2>&1 | grep python | grep -o -E " [0-9]+ " | xargs kill

The 2>&1 redirection is necessary because of how fuser outputs its results (the verbose table goes to stderr). grep python selects all lines containing python, then grep -o -E " [0-9]+ " extracts the PIDs, and xargs kill kills all of them. Please run sudo fuser -v /dev/nvidia* 2>&1 | grep python first to verify that no unwanted processes were selected by mistake.

ORIGINAL ANSWER

The following command will display the processes using the device files /dev/nvidia*, and a prompt will ask you whether you want to kill them one by one:

    $ sudo fuser -ikv /dev/nvidia*
                         USER        PID ACCESS COMMAND
    /dev/nvidia0:        root       1197 F...m  Xorg
                         alireza    1451 F...m  gnome-shell
                         alireza    5527 F...m  python
                         alireza    5567 F....  python
                         alireza    5568 F....  python
    Kill process 1197 ? (y/N) N
    Kill process 1451 ? (y/N) N
    Kill process 5527 ? (y/N) y
    ...

This isn't a one-liner that kills all python commands (that should be possible using fuser | grep | cut | kill), but it is faster than typing each PID by hand.
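An alternative sketch of the same idea using awk (my variation, not from the original answer): match only lines whose COMMAND column is exactly python, which avoids catching processes that merely contain "python" somewhere in the line. In fuser's verbose table the PID is always the third field from the end, whether or not the line starts with a filename, so the full pipeline would be sudo fuser -v /dev/nvidia* 2>&1 | awk '$NF == "python" {print $(NF-2)}' | xargs -r kill. Demonstrated here on a copy of the sample output:

```shell
sample='/dev/nvidia0: root 1197 F...m Xorg
alireza 1451 F...m gnome-shell
alireza 5527 F...m python
alireza 5567 F.... python'
# Extract the PIDs of the python entries only.
printf '%s\n' "$sample" | awk '$NF == "python" {print $(NF-2)}'
# prints:
# 5527
# 5567
```

The -r flag to xargs (GNU extension) keeps kill from running at all when no PIDs match.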
kill processes shown by sudo fuser filtered by COMMAND column
1,492,630,278,000
This is with Ubuntu 18.04 on a Dell 7920. The Dell power light goes on, but I get nothing aside from the occasional flickering of the HD light when trying to boot with the 2080 in my box. (I actually have two, and the same thing happens with either one.) Everything boots fine when the RTX comes out. The monitor is plugged into an NVIDIA P2000, which came with the system. Same results before and after updating to the most recent driver (415), which is explicitly supposed to support the 2080, though the problem happens so early on that it seems unlikely to be a driver issue. I'm not even getting the Dell logo.

It's a fresh system, but legacy boot has been enabled and a few other changes have been made to ensure the HD can be encrypted. The computer booted with the 2080 before these changes were made, but there's probably no going back, since the disk encryption is mandatory. (And the IT guy started his vacation today.) I've even tried different PCIe slots; no change. What can I try?

UPDATE: So on a hunch (anticipated by @SiXandSeven8ths) I took out the P2000 Quadro and things booted just fine, again with the 415.23 NVIDIA drivers. nvidia-smi works and everything appears in order. Is it really impossible to have both cards in the system at the same time (with NVIDIA drivers)? I'm happy to lose fancy graphics from the Quadro if need be; I just need it to run 2 monitors, mostly for coding, while the 2080s do ML. This post, for example, suggests that it is possible (see iamacow's response, 1st para.). Any suggestions how?

UPDATE 2: It turned out to be surprisingly difficult to get things going even with a new low-end graphics card (GeForce 1050 Ti). Two 2080 Tis did not work immediately; booting hung when the OS got to the point where it started the GNOME Display Manager. (Incidentally, for CUDA 10 I needed to go with driver 410.48, manually installed from a .run file.) Reinstalling drivers did nothing, but everything 'just worked' when I switched from GDM to LightDM.
To get the monitors to display from the new card, I had to get the GPU ID numbers from NVIDIA X Server Settings (different from what they are in nvidia-smi!) and manually edit xorg.conf accordingly; doing it from the Settings GUI didn't seem to work, at least on the first try. But now everything appears to be working as intended.

Last note: a fun fact about the 7920 is that despite packing a 1400W PSU it only has 3 PCIe 8-pin drops, while for 2 RTXs you need 4. A splitter solved this, and since these cards max out at about 280W I'm not too worried about overloading anything, though full disclosure: the system has not been fully stress-tested yet.
Following @K7AAY's suggestion and making this an answer. On a hunch (anticipated by @SiXandSeven8ths) I took out the P2000 Quadro and things booted just fine, again with the 415.23 NVIDIA drivers. nvidia-smi works and everything appears in order. I've since installed CUDA 10, cuDNN 7 and PyTorch, and everything seems well.

So the 'solution' here seems to be: you can't mix graphics card models in a single system, at least not without manual intervention of some kind (e.g., editing an xorg.conf file or something like that). I'll need to replace the P2000 with a low-end GeForce model to achieve my goal of a lesser card for video display and two 'killer' cards for doing ML.
Black screen after inserting RTX 2080 Ti, computer won't boot at all
1,492,630,278,000
I have installed the Lightworks video editor on Debian Jessie. For best performance it needs to run on a discrete video card with the proprietary driver. A Nvidia GTX 860M in my case. I have installed Bumblebee to switch between video cards as needed. With optirun or primusrun it is possible to run an application using the Nvidia card. When I use optirun for Lightworks it crashes after startup. When I use primusrun it doesn't and performance is okay. Why is that? What is the difference between the two? This question has been asked before, but remains unanswered. This answer on a different question does allude to a difference, but doesn't explain it.
Bumblebee originally used VirtualGL as its core and later switched to the Primus technology. optirun uses VirtualGL, while primusrun uses Primus; that must be the reason for the difference you're seeing.

Note: though the question was posted long ago, this answer may help those who want to understand the difference.
What is the difference between optirun and primusrun (bumblebee)
1,492,630,278,000
I have found a benchmark from computerbase.de (http://www.computerbase.de/artikel/grafikkarten/2013/intel-haswell-grafik-fuer-desktop-pcs-im-test/3/ - in German) where in one case a task (here, video transcoding) is done by the CPU and in the other case by the (integrated) GPU. How can I assign a task (for example, video transcoding) explicitly to the GPU in Linux?
I think the best way to make use of the cores in your GPU is to use OpenCL. The idea is quite simple. You write a kernel (a small block of code, where you can use only basic C code, without libraries). For example, if you want to filter a frame, you have to do some calculation on each pixel, and that is what the kernel code will do.

Then you have to compile the kernel, allocate memory on the GPU and copy the input data there from main memory. You also have to allocate memory for the result. Then you create threads and send the kernel for execution on the GPU (I think you can also use both CPU and GPU cores at once to execute kernels). Each thread executes the kernel once, for one pixel. Afterwards you copy the result back to main memory and continue working on the CPU.

This is the simplest way I can explain it, but there are still a million details you have to know, so you'd better start learning ;) You can start with your graphics card manufacturer's developer web site. There you will find the libraries and tutorials on how to start developing in OpenCL.
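The per-pixel kernel described above might look like this in OpenCL C (an illustrative sketch of a trivial brightness filter, not tied to any particular transcoder; the host code would compile this source, copy the frame to device memory, enqueue one work-item per pixel, and copy the result back - exactly the steps listed above):

```c
/* One work-item runs per pixel. */
__kernel void brighten(__global const uchar *in,  /* input frame, one byte per pixel */
                       __global uchar *out,       /* output frame */
                       const int n,               /* number of pixels */
                       const uchar delta)         /* brightness to add */
{
    int i = get_global_id(0);      /* this work-item's pixel index */
    if (i < n) {                   /* guard against padded global sizes */
        int v = in[i] + delta;
        out[i] = (v > 255) ? 255 : (uchar)v;   /* clamp to 8 bits */
    }
}
```

Note that this is device code only: it is compiled at runtime by the OpenCL driver, not by your ordinary C compiler.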
How to assign tasks to GPU
1,492,630,278,000
My laptop is a normal, uninteresting machine with two standard, unmultiplexed GPUs and an ordinary Debian stretch installation. The secondary GPU (a Radeon) is usually powered down, but I can activate and use it with (for example) DRI_PRIME=1 glxgears. Mesa's source file src/loader/loader.c manages it.

Is DRI_PRIME undocumented? I wish to read the documentation but cannot find it. Oddly, it isn't here. Moreover, Google cannot locate it. If you know where the documentation is, would you tell? Switching GPUs is a fairly important system function. One would think that the mechanism that does it would be thoroughly documented, but all I can find are a few oblique changelog entries and some online lore like this.

ADDITIONAL INFORMATION

You won't need Debian to answer my question; any Linux should do. In case a reader who wishes to start to learn about GPU switching stumbles in here, he can try

    sudo cat /sys/kernel/debug/vgaswitcheroo/switch

and then read html/newstyle/gpu/vga-switcheroo.html in the Linux kernel source. Also, man 8 lspci. It took me two hours to figure out that much, so I mention it here to save the reader time.

Meanwhile, where is the proper documentation of Mesa environment variables like DRI_PRIME, please?
I wrote up some notes at https://robots.org.uk/LinuxMultiGPUDeviceSelection - these are not complete but could be used as the basis for a more complete answer if someone wants to write one. :)
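For quick experiments while reading those notes: DRI_PRIME behaves like any other environment variable read by Mesa at process startup. Assuming mesa-utils and a running X session, you could compare renderers with glxinfo | grep "OpenGL renderer" versus DRI_PRIME=1 glxinfo | grep "OpenGL renderer". The sketch below demonstrates only the env-var plumbing itself, since the variable affects nothing but the processes it is exported to:

```shell
# The variable is ordinary per-process environment; Mesa's loader
# (src/loader/loader.c, as noted in the question) reads it on startup.
DRI_PRIME=1 sh -c 'echo "child sees DRI_PRIME=$DRI_PRIME"'
# prints: child sees DRI_PRIME=1
```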
The standard mechanism to switch GPUs isn't undocumented, is it?
1,492,630,278,000
I have two GPUs, a GTX 1070 and a GT 710. I have only one display and I would like this display to run off of the GT 710, so that I can continue to work while I am training models using CUDA. I have been at this for quite a few hours, and the furthest I have been able to get is to boot into Mint in "fallback mode" with the monitor connected to the GT 710. I have been following the instructions here: https://forums.developer.nvidia.com/t/how-do-i-set-one-gpu-for-display-and-the-other-two-gpus-for-cuda-computing/49113

I have tried two methods.

1) First attempt: as suggested by user "birdie" in the link above, I created the file nvidia.conf in the directory /etc/X11/xorg.conf.d, with the following content:

    Section "Device"
        Identifier "GT710"
        BusID "PCI:5:0:0"    # my Bus ID for the GT 710
        Driver "nvidia"
        VendorName "NVIDIA"
    EndSection

Then I went to xorg.conf in /etc/X11/ and modified the entry for the screen as follows:

    Section "Screen"
        Identifier "Screen0"
        Device "GT710"    # modified here
        Monitor "Monitor0"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "nvidiaXineramaInfoOrder" "DFP-6"
        Option "metamodes" "2560x1440_75 +0+0"
        Option "SLI" "Off"
        Option "MultiGPU" "Off"
        Option "BaseMosaic" "off"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

By doing this I was able to boot into Mint in fallback mode with my display connected to the GT 710.
2) Second attempt: I created a second Device entry in /etc/X11/xorg.conf:

    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BusID "PCI:5:0:0"
        BoardName "GeForce GT 710"
        Option "AllowEmptyInitialConfiguration"
    EndSection

Then I edited the Screen entry in /etc/X11/xorg.conf as follows:

    Section "Screen"
        Identifier "Screen0"
        Device "Device1"    # edited here
        Monitor "Monitor0"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "nvidiaXineramaInfoOrder" "DFP-6"
        Option "metamodes" "2560x1440_75 +0+0"
        Option "SLI" "Off"
        Option "MultiGPU" "Off"
        Option "BaseMosaic" "off"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

Again I was able to boot into Mint, but only in fallback mode when connected to the GT 710. I would appreciate any help in making this work. Thank you.
I solved the problem. The solution is to use approach #2 and edit /etc/X11/xorg.conf to add the second GPU as shown above. Then, under Section "Screen", change "MultiGPU" to "on". More details can be seen here. I will post my new xorg.conf in case it helps anyone in the future:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 440.82

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "0"
    EndSection

    Section "Files"
    EndSection

    Section "Module"
        Load "dbe"
        Load "extmod"
        Load "type1"
        Load "freetype"
        Load "glx"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "Philips PHL 325E1"
        HorizSync 114.0 - 114.0
        VertRefresh 48.0 - 75.0
        Option "DPMS"
    EndSection

    # makes the GTX 1070 work on the display
    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTX 1070"
    EndSection

    # added to make the GT 710 run my display
    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BusID "PCI:5:0:0"
        Option "AllowEmptyInitialConfiguration"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "Device1"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "nvidiaXineramaInfoOrder" "DFP-6"
        Option "metamodes" "2560x1440_75 +0+0"
        Option "SLI" "Off"
        Option "MultiGPU" "on"    # change applied here
        Option "BaseMosaic" "off"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection
How do I use my secondary GPU for display output and primary only for computation?
1,492,630,278,000
I have a Supermicro server running Ubuntu Server 14.04, and I would like to install a Quadro 400 (for display), an Nvidia GTX 295 and an Nvidia K80. However, when I install the driver for the K80, the Quadro 400 and the GTX 295 do not appear in nvidia-smi.

When I try to install the drivers for the GTX 295 (which seem to be the same as for the Quadro 400) from the Nvidia website, it says that it needs to de-install the previously installed driver (even though that driver was for the K80 and not the GTX 295).

Has anybody had this problem before, and does anyone know how to install and detect multiple GPUs? Based on my previous searches, I have also created a file called blacklist-nouveau.conf in /etc/modprobe.d/ containing the following:

    blacklist nouveau
    blacklist lbm-nouveau
    options nouveau modeset=0
    alias nouveau off
    alias lbm-nouveau off

At the moment, when running nvidia-smi (after having tried to install all drivers), I get the following message:

    Failed to initialize NVML: Unknown Error

Thanks.
This is the solution:

1. I re-installed Ubuntu Server 14.04.
2. I followed points 1, 2 and 3 from the official documentation (cuda-getting-started-guide-for-linux).
3. I ran nvidia-smi, which only showed me the K80.
4. I unplugged the K80.
5. I installed the drivers for the GTX 295 and Quadro 400 manually: sudo apt-get install nvidia-340
6. I re-plugged the K80.
7. I restarted the system and ran nvidia-smi (it showed all graphics cards, but nothing seemed to be accessible via CUDA code or Nsight), so I re-ran (hoping the drivers for the GTX and Quadro would not be removed): sudo apt-get install cuda-drivers
8. I restarted the server (at this point nvidia-smi only shows me the K80... again!).
9. I finally installed: sudo apt-get install nvidia-cuda-toolkit
10. I restarted the server and yup, it worked: they are all detected and all GPUs are available.

All cards now appear in nvidia-smi, although I seem to have gained a graphical interface too, which is strange as I did not install it, but fair does. I'll see if it works now.
Multi GPU Supercomputer