| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,304,322,391,000 |
I'd like to know whether binaries use (are compiled for) special instruction sets like SSE 4.1/4.2, AVX, F16C or not. How can I find out whether a binary in a package uses certain instruction sets?
I know that I may enable such instructions using configure switches when compiling packages by hand, but when using precompiled packages from the Debian repository there must be a default.
Probably binaries are not compiled for very specific instruction sets, because then they could not run on every system; or, depending on the binary, they are compiled with alternative subroutines that "emulate" the processing using basic instructions, to support CPUs lacking such features.
I know that I could look into the rules file of a Debian source package but I'm interested if there's an easier way to do this.
Are the CPU instructions a x86-64 binary uses very limited?
May packages use quite specific instruction sets, may they have fallbacks using more primitive instructions?
|
To answer your question generally, https://superuser.com/questions/726395/how-to-check-if-a-binary-requires-sse4-or-avx-on-linux gives a script which can be fed a disassembled binary (the output of objdump) and will try to figure out the minimum required instruction set. This technique is necessarily approximate, since a given binary may have different execution paths depending on the available instruction set, and such a binary would appear to require the "best" instruction set it supports, even though it can actually run without it.
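The linked script works by scanning the objdump disassembly for mnemonics belonging to each instruction-set extension. A minimal sketch of the same idea (the mnemonic list here is illustrative, not the linked script's actual table):

```shell
# Count AVX-style mnemonics in a disassembly listing read from stdin.
count_avx() {
    grep -cE '[[:space:]]v(add|sub|mul|div|fmadd|pcmp)[a-z]*' || true
}

# usage (assumes GNU binutils):
#   objdump -d ./mybinary | count_avx
```

A non-zero count only means such instructions are present somewhere in the binary; as noted above, they may sit on a code path that is never taken on older CPUs.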
Specifically regarding Debian packages, anything packaged for amd64 (the 64-bit "PC" instruction set) must be able to run on any AMD64 or x86-64-compatible CPU; this means it can use the full standard 64-bit instruction set, which includes SSE2, but it can only use other features (including those you list) if it provides fallbacks for CPUs which don't support them. There are exceptions, but the packages should indicate that in the package description (see for example rr).
The Linux kernel itself can adapt its features to those of the CPU it finds itself running on. If it's compiled for x86-64, it assumes that it can use a TSC, CMPXCHG64, CMOV (look for anything that depends on X86_64 at that link). But it can be compiled to check at run-time whether such features as AES-NI, AVX, AVX2 are available, and use them in particular to accelerate crypto calculations. This may benefit any program using these features (at the cost of context switches into the kernel though).
In some cases the dynamic linker can also help provide alternatives depending on the CPU: thus libc6-i686 provides a set of libraries for the i386 architecture which will be used automatically instead of the bare-bones equivalents on CPUs compatible with the i686 instruction set (and supporting CMOV).
| Debian: what instructions do x86-64 binaries use? |
1,304,322,391,000 |
I was inspecting the preprocessed output of my C program and happened to look at the header file wordsize.h
It is located in
/usr/include/i386-linux-gnu/bits/wordsize.h
the file contains only one macro
#define __WORDSIZE 32
My question is: is the word size decided by the compiler that is installed, does it have something to do with the OS that I have installed (32-bit or 64-bit), or does it have something to do with the hardware configuration of my machine?
I am new to development under Linux.
|
In general, the word size is decided by the target architecture when compiling. Your compiler will normally use the word size of the current system.
Using gcc (among others) you can also tune this by using various flags. E.g. on a 64-bit host you can compile for 32-bit machine, or force 32-bit words.
-m32 # int, long and pointer to 32 bits, generates code for i386.
-m64 # int, long and pointer to 64 bits, generates code for x86-64.
-mx32 # int, long and pointer to 32 bits, generates code for x86-64.
You should also look at limits.h and inttypes.h to see how this definition is used.
For cross-compiling check out multilib (32-bit link on SO) and search the web.
Check what flags your GCC was built with by:
gcc -v
As for the sizes, they are usually closely related to the central processing unit – such as the maximum size of a memory address, the size of CPU registers, etc.
For a quick glimpse, you do not need to understand much of this, but depending on where you are it can give some insight:
If you use gcc and compile with the -S flag you can also look at the assembly instructions. Here, somewhat confusingly, on e.g. a 32-bit machine a word is 16 bits and a long is 32 bits (__WORDSIZE).
So e.g. movl $123, %eax means move a long (32-bit – __WORDSIZE) 123 to the eax register, and movw means move a word (16-bit).
These are naming conventions – and only to say that WORDSIZE can mean more than one thing. You can also come across code where they define something like
#define WORD_SIZE 16
as it all depends on context. If you read data from a file or stream where the source has a word size of 16 bits, this would be natural. The point is only: do not always assume that word size means __WORDSIZE when you read it in code.
A user-defined WORD_SIZE like the example above does not affect the instruction set in the generated machine code. For GCC in general I would recommend this book. (Unfortunately it is a bit old – but I have yet to find a similarly easy-to-read, more up-to-date book. (Not that I have looked that hard.) It is short, concise and sweet. And if you only keep in mind that things may have changed, such as added features etc., it gives a good introduction nonetheless.)
It gives a quick and nice introduction to the various aspects of compiling. Look at chapter 11 for a nice explanation of the compile chain.
I do not know of any options in GCC to compile 16-bit code. One way to do it would be to write in assembly, using .code16 to tell the assembler that the code should be 16-bit.
Example:
.file "hello.s"
.text
.code16 /* Tell GAS to use 16-bit instructions. */
.globl start, _start
start:
_start:
movb $0x48, %al
...
This is needed by e.g. boot loaders such as GRUB and LILO for the code present in the MBR of your hard drive.
The reason for this is that when your computer boots up, the CPU is in a special mode where it does not have 32-bit but at most 16-bit instructions, AKA Real Mode.
In short, what happens is that the BIOS does a hardware test, then loads the first 512 bytes of your boot disk into memory and hands control to that code, starting at address 0. This code in turn locates where the next stage of files resides, loads these into memory, and continues executing, finally entering Protected Mode, where you have normal 32-bit mode.
| default wordsize in UNIX/Linux |
1,304,322,391,000 |
So when I run Raspbian (basically an ARM Debian derivative with LXDE), I can install any normal package using aptitude. But if, for example, I wanted to download a .deb file, I would have to choose 32-bit or 64-bit, download that, and try to run it on Raspbian (it wouldn't work).
Why does installing packages from the official repositories work on ARM systems?
Why is it not incompatible?
I'm also a little confused about the difference between hardware and software bits. ARM is hardware, right?
|
TL,DR: if you're only offered a choice of “32-bit” and “64-bit”, neither is right for a Raspberry Pi (or any other ARM-based computer). You need a package for ARM, and the right one to boot, which is armhf.
“32-bit” and “64-bit” are only one of the characteristics of a processor architecture. Many processor families come in both 32-bit and 64-bit variants (x86, ARM, Sparc, PPC, MIPS, …). Debian alone has 23 official binary distributions for different processor characteristics and different software characteristics.
You need to install a package which matches the ABI for your system. The ABI (application binary interface) includes the processor type (more precisely, its instruction set), but also other characteristics related to the ways programs interact. In particular, when a program makes a call to code that is in a library, the ABI determines how the arguments to the library function are passed (in registers or on the stack).
In the PC world, there are two instruction sets (up to minor variations that do not matter):
IA-32, a variant of x86, commonly known as i386 (the name used by Debian) or i686 (which, like IA-32, are generations of the x86 architecture series);
x86-64, also known as x64 or amd64 (the name used by Debian) (not to be confused with IA-64 which is completely different).
Both Intel and AMD make processors that implement the x86 and x86-64 instruction sets. Modern PCs have processors that support both the x86-64 and the x86 instruction sets; older PCs have processors that support only x86. Because the x86 instruction set uses 32-bit registers and the x86-64 instruction set uses 64-bit registers, and because for each instruction set there is a single ABI used by all Linux installations¹, these are often described as just “32-bit” or “64-bit”. In a PC context, “32-bit” means “x86” and “64-bit” means “x86-64”.
ARM processors have a completely different instruction set. You cannot install an x86 or x86-64 package on an ARM system. You need a package for ARM, for the correct instruction set, and more generally for the correct ABI. There are no major 64-bit distributions for 64-bit ARM processors yet, because the ARMv8 architecture revision which introduces a 64-bit instruction set is still very new and not commonly available. There are however multiple 32-bit ABIs, which assume the existence of different processor features and use different versions of the argument-passing convention. The main ARM ABIs used on Linux are:
armel, based on the ARM EABI version 2 (known as “ARM EABI” or “EABI” for short), in its little-endian incarnation;
armhf, which is a variant of armel that takes advantage of some features of newer ARM CPUs, in particular hardware floating-point support.
All devices that support armhf also support armel; however a given system installation must be consistent. Raspbian uses armhf (in fact, it started out as a port of Debian's armel to armhf, back when armhf was a new thing).
¹ At least for mainstream distributions. There are embedded distributions that have several x86 binary releases, with packages compiled against different versions of the standard C library (glibc, dietlibc, uclibc, …).
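On a dpkg-based system you can check which of these architectures your installation uses, and which foreign architectures (if any) have been enabled for multiarch:

```shell
dpkg --print-architecture            # e.g. armhf on Raspbian, amd64 on a 64-bit PC
dpkg --print-foreign-architectures   # extra architectures enabled via multiarch
```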
| 32-bit vs 64-bit vs ARM in regards to programs and OSes |
1,304,322,391,000 |
I'm looking for a tool that can create both graphical files as well as text based ASCII representations of my system's CPU & motherboard architectures.
|
I recently came across this tool called lstopo that's bundled in the package hwloc (at least on Fedora 19, that's where it was located). This tool seems to have everything one would want and more.
Here are a couple of samples. The first is the graphical representation that is output when you run the tool without any switches.
$ lstopo
PNG screenshot
$ lstopo --output-format txt -v --no-io --no-legend > lstopo.txt
ASCII screenshot
┌────────────────────────────────────────┐
│ Machine (7782MB) │
│ │
│ ┌────────────────────────────────────┐ │
│ │ Socket P#0 │ │
│ │ │ │
│ │ ┌────────────────────────────────┐ │ │
│ │ │ L3 (3072KB) │ │ │
│ │ └────────────────────────────────┘ │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ L2 (256KB) │ │ L2 (256KB) │ │ │
│ │ └──────────────┘ └──────────────┘ │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ L1d (32KB) │ │ L1d (32KB) │ │ │
│ │ └──────────────┘ └──────────────┘ │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ L1i (32KB) │ │ L1i (32KB) │ │ │
│ │ └──────────────┘ └──────────────┘ │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Core P#0 │ │ Core P#2 │ │ │
│ │ │ │ │ │ │ │
│ │ │ ┌──────────┐ │ │ ┌──────────┐ │ │ │
│ │ │ │ PU P#0 │ │ │ │ PU P#2 │ │ │ │
│ │ │ └──────────┘ │ │ └──────────┘ │ │ │
│ │ │ ┌──────────┐ │ │ ┌──────────┐ │ │ │
│ │ │ │ PU P#1 │ │ │ │ PU P#3 │ │ │ │
│ │ │ └──────────┘ │ │ └──────────┘ │ │ │
│ │ └──────────────┘ └──────────────┘ │ │
│ └────────────────────────────────────┘ │
└────────────────────────────────────────┘
But these represent just the basics of what you can do with this tool. If you consult the man page, you'll find that nearly every aspect of the output can be customized or disabled to suit whatever needs you may have.
Just to give you a sense of what you can enable and disable.
--no-caches
Do not show caches.
--no-useless-caches
Do not show caches which do not have a hierarchical impact.
--no-icaches
Do not show Instruction caches, only Data and Unified caches are
displayed.
--merge
Do not show levels that do not have a hierarchical impact.
--restrict <cpuset>
Restrict the topology to the given cpuset.
--restrict binding
Restrict the topology to the current process binding. This option
requires the use of the actual current machine topology (or any
other topology with
--no-io
Do not show any I/O device or bridge. By default, common devices
(GPUs, NICs, block devices, ...) and interesting bridges are
shown.
The list goes on; this is just to give you a sense.
Portable Hardware Locality (hwloc)
The project, hwloc, that provides this tool and many others is part of the Open MPI Project. The hwloc project is described as follows:
The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
The lstopo tool is one of many tools available through this project.
| Is there a tool that I can use to create a diagram of my system's architecture? |
1,304,322,391,000 |
man uname
-m, --machine print the machine hardware name
-i, --hardware-platform print the hardware platform or "unknown"
What exactly is meant by hardware platform here and how is it different from the "machine hardware name"? I found some related questions on SE but there seems to be some contradictions among the accepted answers. Where can I find accurate information about this nomenclature?
|
A bit more info in info uname:
`-i'
`--hardware-platform'
Print the hardware platform name (sometimes called the hardware
implementation). Print `unknown' if the kernel does not make this
information easily available, as is the case with Linux kernels.
`-m'
`--machine'
Print the machine hardware name (sometimes called the hardware
class or hardware type).
Basically these are classification types – you can have different hw implementations (-i) within the same hw class (-m).
Used, for example, to differentiate between kernel modules shared by the same hw class and modules specific to a certain hw implementation.
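For a quick look at both values on your own machine (the output depends on your hardware and kernel):

```shell
uname -m   # machine hardware name (hw class), e.g. x86_64
uname -i   # hardware platform (hw implementation); often "unknown" on Linux
```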
| Meaning of hardware platform in uname command output |
1,304,322,391,000 |
I see the term i386 instead of x86 in many places related to Linux. To my knowledge, they are not interchangeable: x86 is a family of instruction set architectures, while i386 is one specific x86 processor. But why does the Linux world use the term i386 instead of x86?
References:
x86 | Wikipedia
Intel 80386 | Wikipedia
|
i386, or 80386, was the first 32-bit x86 processor. When it was introduced, the term i386 started to be used in many places, including in OSs and compilers, which made it impossible or very difficult to change later.
Even after the introduction of more advanced x86 processors, including the 486 and 586, many manufacturers didn't bother to change the label i386 and started to use it as an alias for any 32-bit x86 processor.
| Why does the Linux world use the term i386 instead of x86? |
1,304,322,391,000 |
When building deb packages optimized for specific CPU instructions, how do I express those CPU instructions as a dependency of the deb packages?
The package isn't intended for mass distribution, but I don't want to have people confused with crashes because their CPU is too old for my builds.
|
I'm not sure the dpkg format, itself, can do what you require.
However, you can make use of preinstall scripts. In these you can test whether the CPU is of the right level, and abort if it is not good enough. That way your package won't install.
The preinst script is part of the control section of a pkg; you can read about it at https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html
These are sometimes called preinstall sanity scripts. If they end with a non-zero exit code then the package install fails.
Your preinst script could be as simple as
#!/bin/sh
set -e
flags=$(grep '^flags' /proc/cpuinfo | head -1)
if [ -z "$(echo "$flags" | grep sse4)" ]
then
    echo "Can only run on machines with SSE4 instructions. Install failed."
    exit 1
fi
exit 0
| How do I put hardware dependencies on .deb packages? |
1,304,322,391,000 |
On my computer, uname -m prints x86_64 as output. What is the list of possible values that this command could output? I intend to use this command from a dynamic runtime to check the CPU architecture.
|
I’m not aware of a definitive list of possible values; however there is a list of values for all Debian architectures, which gives good coverage of the possible values on Linux: aarch64, alpha, arc, arm, i?86, ia64, m68k, mips, mips64, parisc, ppc, ppc64, ppc64le, ppcle, riscv64, s390, s390x, sh, sparc, sparc64, x86_64 (there are other possible values, but they’re not supported by Debian; I’m ignoring the Hurd here). Another source of information is the $UNAME_MACHINE matches in config.guess; this isn’t limited to Linux.
Note that uname -m reflects the current process’ personality, and the running kernel’s architecture; not necessarily the CPU architecture. See Meaning of hardware platform in uname command output for details.
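If you're dispatching on these values at run time, a case statement keeps the open-ended list manageable; the patterns below are examples drawn from the list above, not a complete enumeration:

```shell
case "$(uname -m)" in
    x86_64)   family="64-bit x86" ;;
    i?86)     family="32-bit x86" ;;
    aarch64)  family="64-bit ARM" ;;
    arm*)     family="32-bit ARM" ;;
    *)        family="other ($(uname -m))" ;;
esac
echo "$family"
```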
| `uname -m` valid values |
1,304,322,391,000 |
I've previously added i386 support in order to install wine32 with:
sudo dpkg --add-architecture i386
Now I don't need wine32 anymore and removed it, then wanted to remove the i386 architecture again, but it states:
sudo dpkg --remove-architecture i386
dpkg: error: cannot remove architecture 'i386' currently in use by the database
I assume that I could remove all i386 packages shown by dpkg --list | grep i386, but I'm not sure whether this may impair my systems functionality or not.
My question is whether it's safe to remove the listed i386 packages, considering that I removed wine32.
Or, on the other hand, whether keeping the i386 architecture may interfere with my system in any way.
Debian Stretch 4.1.0-2-amd64
|
On a debian amd64 system, the i386 architecture is an optional extra. No i386 packages are required for the system to function.
If you are not using any 32-bit programs, you can safely remove all :i386 packages, and the i386 architecture.
Personally, I wouldn't bother removing them unless disk space was extremely tight. The i386 packages do no harm and you may want to run 32-bit software again in the future.
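If you do decide to clean up, something along these lines works (a sketch; review the package list before purging anything):

```shell
# List the remaining i386 packages first:
dpkg --list | awk '/:i386/ { print $2 }'

# Then, once the list looks safe to drop:
#   sudo apt-get purge ".*:i386"
#   sudo dpkg --remove-architecture i386
```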
| Added i386 support for wine, removed it now I can't remove the architecture |
1,304,322,391,000 |
Is there a (relatively) simple way to test if an executable not only exists, but is valid?
By valid, I mean that an x86_64 Mach-O (OS X) executable will not run on an ARM Raspberry Pi. However, simply running tool-osx || tool-rpi works on OS X, where the executable runs, but does not fall back to tool-rpi when the x86_64 fails.
How can I fall back to another executable when one is invalid for the processor architecture?
|
Rather than testing for a valid executable, it's probably best to test what the current architecture is, then select the proper executable based on that. For example:
if [ "$(uname -m)" = 'armv6l' ]; then
tool-rpi
else
tool-osx
fi
However, if testing the executable is what you really want to do, GNU file can tell you the architecture of an executable:
user@host:~$ file $(which cat)
/bin/cat: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0x4e89fd8f129f0a508afa325b0f0f703fde610971, stripped
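If you'd still rather try-and-fall-back, the shell's exit statuses can distinguish "ran and failed" from "could not be executed at all": 126 means found but not executable (e.g. wrong architecture), 127 means not found. A sketch (the function name is mine):

```shell
# Run the first candidate that this machine can actually execute.
run_first_valid() {
    for tool in "$@"; do
        "$tool" && status=0 || status=$?
        # 126 = cannot execute (e.g. wrong arch), 127 = not found: try next
        if [ "$status" -ne 126 ] && [ "$status" -ne 127 ]; then
            return "$status"
        fi
    done
    return 127
}

# usage: run_first_valid tool-osx tool-rpi
```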
| Test if valid executable |
1,304,322,391,000 |
How can using huge page improve performance?
I have read that huge pages improve performance by reducing TLB lookups and reducing the size of the page table. Can someone tell me how this helps with performance?
Is it like this: if I have an application that uses 4 pages of virtual memory (4 × 4 KB = 16 KB), then each page is mapped directly to some physical memory location; but if we use a huge page of 16 KB, then instead of mapping 4 pages it just needs to map one, thus reducing the page table size, lowering the chance of the mapping being swapped out to disk, and letting the TLB cache it for longer?
|
Serge answered it. The TLB has a fixed number of slots. If a virtual address can be mapped to a physical address with information in the TLB, you avoid an expensive page table walk. But the TLB cannot cache mappings for all pages.
Therefore, if you use larger pages, that fixed number of virtual to physical mappings covers a greater overall address range, increasing the hit ratio of the TLB (which is a cached mapping).
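As a back-of-the-envelope illustration (the sizes here are examples, not from the answer): a 1 GiB working set needs vastly fewer mappings with huge pages, few enough to fit entirely in a typical TLB:

```shell
ws=$((1 << 30))    # 1 GiB working set
echo "4 KiB pages: $((ws / 4096)) mappings needed"
echo "2 MiB pages: $((ws / (2 * 1024 * 1024))) mappings needed"
# prints 262144 and 512 respectively
```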
| Huge pages and performance improvement |
1,304,322,391,000 |
The man page for cpuset doesn't seem to clearly list how to figure out which numbers map to which processing units. My current machine has two Intel Xeon E5645s, each of which has 6 cores and hyperthreading enabled, so I have 24 total processing units I can refer to with cpusets. My challenges are: 1) determining which cpuset ID numbers map to which processor, and 2) determining which cpuset ID numbers are paired (e.g. siblings on a core).
Are the numbers that lscpu outputs the same identifiers I should use to refer to cpu set processors? If so, it seems the numbers are alternated here, and this answers (1) with "evens are one processor, odds are the other processor", but I'm not sure if I'm reading it correctly.
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Stepping: 2
CPU MHz: 2393.964
BogoMIPS: 4788.01
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23
lstopo from the hwloc package seems to show me the answer to (2), and if I'm reading the man page correctly the P#... bits are the identifier "used by the OS", which leads me to believe these are the ones I need to pass to cpu sets. So limiting a process to cpus 0 and 12 would be allowing use of two threads on the same core, while limiting it to cpus 0 and 2 would be two threads on two different cores. Does that seem correct?
$ lstopo
Machine (35GB)
NUMANode L#0 (P#0 18GB) + Socket L#0 + L3 L#0 (12MB)
L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#12)
L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#14)
L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
PU L#4 (P#4)
PU L#5 (P#16)
L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
PU L#6 (P#6)
PU L#7 (P#18)
L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
PU L#8 (P#8)
PU L#9 (P#20)
L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
PU L#10 (P#10)
PU L#11 (P#22)
NUMANode L#1 (P#1 18GB) + Socket L#1 + L3 L#1 (12MB)
L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
PU L#12 (P#1)
PU L#13 (P#13)
L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
PU L#14 (P#3)
PU L#15 (P#15)
L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
PU L#16 (P#5)
PU L#17 (P#17)
L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
PU L#18 (P#7)
PU L#19 (P#19)
L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
PU L#20 (P#9)
PU L#21 (P#21)
L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
PU L#22 (P#11)
PU L#23 (P#23)
HostBridge L#0
PCIBridge
PCI 14e4:163a
Net L#0 "eth0"
PCI 14e4:163a
Net L#1 "eth1"
PCIBridge
PCI 102b:0532
PCI 8086:2921
Block L#2 "sda"
PCI 8086:2926
|
Use
cat /proc/cpuinfo
There you will see each hyperthread listed like this:
processor : 0
physical id : 0
core id : 1
"processor" stands for the "logical processor", i.e. what is presented to the operating system as a processor. If you have hyperthreading switched on, you will see two "logical processors" per core. The "physical id" identifies the physical processor package that you can touch (you have two of them).
Here is a listing from my 1-processor 4-core system with hyperthreading:
# cat /proc/cpuinfo|egrep "processor|core id|physical id"
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 2
processor : 3
physical id : 0
core id : 3
processor : 4
physical id : 0
core id : 0
processor : 5
physical id : 0
core id : 1
processor : 6
physical id : 0
core id : 2
processor : 7
physical id : 0
core id : 3
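The same numbering and sibling information is exposed under sysfs, which saves parsing /proc/cpuinfo by hand (a sketch assuming a standard Linux sysfs layout):

```shell
# For each logical CPU, show its core id and which logical CPUs share that core.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    printf '%s: core %s, siblings %s\n' \
        "${cpu##*/}" \
        "$(cat "$cpu/topology/core_id")" \
        "$(cat "$cpu/topology/thread_siblings_list")"
done
```

On the system above, cpu0 would report siblings 0,12, matching the lstopo output.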
| Finding the CPU id numbers to use with cpu sets |
1,304,322,391,000 |
When I do cat /proc/interrupts on my multicore x86_64 desktop PC (kernel 3.16) I see this:
0: 16 0 IO-APIC-edge timer
LOC: 529283 401319 Local timer interrupts
When I do cat /proc/interrupts on my multicore x86_64 laptop (kernel 3.19) I see this:
0: 1009220 0 IO-APIC-edge timer
LOC: 206713 646587 Local timer interrupts
When I saw this difference, I asked myself what the difference between those two is?
I hope someone can explain this rather thoroughly, the explanation given here is not very detailed and does not explain why my desktop PC does not use timer, but my laptop does.
|
On your (apparently) x86 PC architecture:
IRQ 0 is the interrupt line associated to the first timer (Timer0) of the Programmable Interval Timer. It is delivered by the IO-APIC to the boot cpu (cpu0) only.
This interrupt is also known as the scheduling-clock interrupt or
scheduling-clock tick or simply tick:
If the NO_HZ kernel configuration knob is not set (or under Linux kernel versions < 3.10), this interrupt is programmed to fire periodically at HZ frequency.
If NO_HZ is set, then the PIT will work in its one-shot mode.
Used at early boot time, it can still serve as the scheduling clock tick and for updating system time unless some better clocksource is found available.
It will anyway serve for cpu time accounting if TICK_CPU_ACCOUNTING is set as part of the kernel configuration.
LOC are the interrupts associated with the local APIC timer.
which should be enabled to fire after some tedious initialization (see the link above).
Then, depending on the CPU hardware's ability to keep this clocksource stable during idle times, and depending on the kernel's configuration and boot command-line parameters, it will replace the PIT interrupt for triggering miscellaneous scheduler operations, precise CPU time accounting and system time keeping.
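You can see which clocksource the running kernel actually settled on, and compare the PIT (IRQ 0) and local APIC timer counts, without special tools:

```shell
# Current clocksource chosen by the kernel (e.g. tsc, hpet):
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# IRQ 0 (PIT via IO-APIC) vs LOC (local APIC timer) counts:
grep -E '^[[:space:]]*(0|LOC):' /proc/interrupts
```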
| What is the difference between Local timer interrupts and the timer? |
1,304,322,391,000 |
I sometimes develop code on ARM hardware (Cubietruck or Rpi) as their dire slowness helps me to find code bottlenecks more easily than on amd64. However I want Vim to remain responsive so I need to turn a few things off depending on which architecture I'm running on (cursorline in particular is very resource intensive). How can I detect underlying architecture from my vimrc?
|
What about if you use system() to call uname -m and check your Kernel architecture?
if system("uname -m") == "armv7l\n"
set foo
set bar
endif
As suggested in the comments, a \n was added to the comparison string, since uname -m appends a newline to its output.
| Can I detect instruction set architecture in vimrc? (ARM vs x86) |
1,304,322,391,000 |
I am trying to find out the cache mapping scheme, including associativity, for all the cache levels of a Linux server; however I do not have root access. I would just use dmidecode for this, but it needs root access. Is there another way of getting the same information without root?
|
lscpu, in util-linux, describes the cache layout without requiring root:
[...]
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
The files in /sys/devices/system/cpu/cpu*/cache/ should contain all the information you’re looking for, including associativity, and are readable without being root, but they’re a little harder to parse:
grep . /sys/devices/system/cpu/cpu*/cache/index*/*
(I got this from Where is the L1 memory cache of Intel x86 processors documented?)
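For the associativity specifically, the relevant attributes under that directory are ways_of_associativity and coherency_line_size, for example:

```shell
# Associativity and line size per cache level, readable without root:
grep . /sys/devices/system/cpu/cpu0/cache/index*/ways_of_associativity \
       /sys/devices/system/cpu/cpu0/cache/index*/coherency_line_size
```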
| How to get all CPU cache information without root access |
1,304,322,391,000 |
In the Debian download CD/DVD images page they have different ISO's for the different instruction set architectures. How do I know what is the ISA of a CPU before I buy one? I know about using the commands
cat /proc/cpuinfo
and
lscpu
but these are only good after getting the CPU and running these commands on a Linux based OS. How do I find out this information before getting the CPU?
For example the CPU:
Intel(r) core(tm) i5-6300hq cpu @ 2.30ghz
On the official Intel website they show the instruction set is "64-bit", but nothing as specific as what is mentioned on the Debian website:
amd64 / arm64 / armel / armhf / i386 / mips64el /mipsel / ppc64el / s390x / multi-arch
Can someone tell me how they would go about finding this information?
|
If you don't have the CPU yet, I presume you are planning to buy one.
If that is the case, then you can find out everything about the prospective cpu you are going to buy by looking up the data by the model number of the cpu you are looking at.
You can guess the architecture from the manufacturer, as most manufacturers (e.g. Intel) only produce a small number of architectures (for Intel, currently, AMD64 aka x86-64, but i386 and IA-64 in the past).
Typically the model number of the cpu will allow you to look up even more detailed information. Wikipedia typically has well collected data in tables on this, but you can also typically find this on the manufacturers' websites.
For your specific example i5-6300hq, a google search finds a reference to it in the wikipedia page https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_processors (with a specific table entry for your example further down) which in turn calls this an "Intel Core" processor, which links to https://en.wikipedia.org/wiki/Intel_Core
In the side bar on this page, it lists x86-64, linked to https://en.wikipedia.org/wiki/X86-64 and the first line of that page lists AMD64.
Each of these pages has abundant details on what each classification means and how it relates to similar cpus, including the outdated i386 and IA-64.
| How to find out what is the Instruction Set Architecture (ISA) of a CPU? |
1,304,322,391,000 |
my platform:
SOC = STM32H743 (ARMv7E-M | Cortex-M7)
Board = Waveshare CoreH7XXI
Linux Kernel = 5.8.10 (stable 2020-09-17)
initial defconfig file = stm32_defconfig
rootfs = built using busybox | busybox compiled using arm-linux-gnueabihf-gcc
I've created rootfs by following this guide.
My kernel cannot execute any file, not even the init file (/linuxrc or /sbin/init).
To make sure that the problem was not with the BusyBox files, I wrote a C hello-world program with the -mcpu=cortex-m7 flag and compiled it with arm-linux-gnueabi-gcc, but again the kernel panicked and threw error -8 (Exec format error).
My busybox files are all linked to the busybox binary, and the binary is correctly compiled for 32-bit ARM:
$ readelf -A bin/busybox
Attribute Section: aeabi
File Attributes
Tag_CPU_name: "Cortex-M7"
Tag_CPU_arch: v7E-M
Tag_CPU_arch_profile: Microcontroller
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_rounding: Needed
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align_needed: 8-byte
Tag_ABI_align_preserved: 8-byte, except leaf SP
Tag_ABI_enum_size: int
Tag_CPU_unaligned_access: v6
the kernel error:
[ 0.925859] Run /linuxrc as init process
[ 0.943257] Kernel panic - not syncing: Requested init /linuxrc failed (error -8).
[ 0.950654] ---[ end Kernel panic - not syncing: Requested init /linuxrc failed (error -8). ]---
my helloworld program:
$ readelf -A hello
Attribute Section: aeabi
File Attributes
Tag_CPU_name: "7E-M"
Tag_CPU_arch: v7E-M
Tag_CPU_arch_profile: Microcontroller
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_rounding: Needed
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align_needed: 8-byte
Tag_ABI_align_preserved: 8-byte, except leaf SP
Tag_ABI_enum_size: int
Tag_CPU_unaligned_access: v6
the kernel error:
[ 1.189550] Run /hello as init process
[ 1.198670] Kernel panic - not syncing: Requested init /hello failed (error -8).
[ 1.205977] ---[ end Kernel panic - not syncing: Requested init /hello failed (error -8). ]---
Why can't the kernel execute these binaries?
|
The problem is that you are compiling it as a normal static ELF executable. You should compile it as an FDPIC ELF executable, because without an MMU you need a position-independent executable (FDPIC).
An FDPIC ELF is not of type ET_EXEC; it is of type ET_DYN (i.e. shared), and it is loaded by the Linux dynamic loader.
Just add the -mfdpic flag and turn off the "Build static binary" option in BusyBox's kconfig menu.
Note that the -mfdpic flag is on by default in arm-uclinux-fdpicabi toolchains.
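As a quick sanity check, the ELF type lives in the e_type field at byte offset 16 of the header (little-endian on this target): 2 means ET_EXEC, 3 means ET_DYN. A small sketch using only od (the helper name is my own):

```shell
# Print the ELF e_type of a file: ET_EXEC for a regular static
# executable, ET_DYN for a shared/FDPIC/PIE executable.
# Assumes a little-endian ELF, as on ARM Cortex-M.
elf_type() {
    # e_type is a 16-bit little-endian field at offset 16;
    # reading the low byte is enough to distinguish 2 from 3
    t=$(od -An -tu1 -j16 -N1 "$1" | tr -d ' ')
    case $t in
        2) echo ET_EXEC ;;
        3) echo ET_DYN ;;
        *) echo "unknown e_type ($t)" ;;
    esac
}
```

An FDPIC build of busybox should report ET_DYN (elf_type bin/busybox), matching the Type field that readelf -h would show.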
| kernel cannot execute binaries (error -8) |
1,304,322,391,000 |
I want to know the difference between architecture and platform in the Linux kernel. When I downloaded the latest kernel tarball, I observed a directory named arch; it contains the names of different processors, and inside any one processor directory there is again a directory called platforms.
For example:-
arch/powerpc is a directory under arch in the Linux kernel source tree, and arch/powerpc/platforms is a directory under powerpc.
So, what does this actually mean?
Can anyone explain this in detail, referring from hardware perspective to software perspective, please?
|
The architecture is the processor type. There are only a relatively small number of architectures. All processor types that execute the same user code are classified as the same architecture, even though there may be several different ways to compile the kernel; for example, x86 and powerpc are each a single architecture, but the kernel can be compiled using the 32-bit instruction set or the 64-bit instruction set (and a 32-bit kernel can execute only 32-bit programs, while a 64-bit kernel can execute both 32-bit and 64-bit programs).
The platform describes everything else about the hardware that Linux cares about. This includes variations on the way booting works, on how some peripherals such as a memory controller, a power management coprocessor, cryptographic accelerators and so on work, etc. Whether features are classified according to a platform or are separate drivers or compilation options depends partly on how fundamental the feature is (i.e. how difficult it is to isolate the code that uses it) and partly on how the person who coded support for it decided to do it.
| Difference between architecture and platform in linux kernel |
1,304,322,391,000 |
I have a system with an unrecoverable /usr partition. Terrified that the drives are going bad, I've booted it into a LiveCD environment, and I can't remember what the install architecture was; the most I know is that it's CentOS 5.5.
Because of the Live environment, none of the standard methods work such as uname or checking /proc.
Here is the kernel that was used: vmlinuz-2.6.18-194.32.1.el5
Is there anything I can scan the file for to figure out if the architecture is 32 or 64 bit?
Or something else I can look at on the file system? Nothing in /usr will work because that partition is now dead.
|
file vmlinuz-2.6.18-194.32.1.el5 will tell you what architecture the kernel was compiled for. If there's a file /boot/config-2.6.18-194.32.1.el5, it will give more information about the kernel compilation options, including the processor architecture.
ls /lib* will tell you what architecture the userland supports. For example, if there's /lib/ld-linux.so.2 on an x86 system, then you have at least basic 32-bit support. If there's /lib/ld-linux-x86-64.so.2 or /lib64/ld-linux-x86-64.so.2 then you have at least basic 64-bit (amd64) support. file /bin/ls will tell you what architecture utilities are compiled from (usually, the whole OS userland is compiled for one architecture, perhaps with additional libraries for another ABI for custom applications).
The kernel and the userland aren't always the same architecture. Amd64 kernels can run 32-bit user programs (but not the converse). If you wanted to know whether you had a 32-bit or 64-bit edition of CentOS, check whether /bin/ls is a 32-bit or 64-bit program.
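If file itself is unavailable in the rescue environment, the ELF class byte (offset 4 of the header) distinguishes 32-bit from 64-bit binaries. Here's a minimal sketch using only od (the function name is mine):

```shell
# Report whether an ELF file is 32-bit or 64-bit by reading
# EI_CLASS (the byte at offset 4): 1 = ELFCLASS32, 2 = ELFCLASS64.
elf_class() {
    c=$(od -An -tu1 -j4 -N1 "$1" | tr -d ' ')
    case $c in
        1) echo 32-bit ;;
        2) echo 64-bit ;;
        *) echo "not a recognised ELF class ($c)" ;;
    esac
}
```

For example, run elf_class /mnt/bin/ls against a binary on the mounted dead system.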
| Determining Linux architecture from files |
1,304,322,391,000 |
In start_kernel(), one of the first things the kernel does is run setup_arch(). setup_arch() is defined for every supported architecture, so it is passed a pointer to the appropriate command line.
How is this pointer initialized, and how and when does the kernel get the architecture of the computer?
|
A given kernel is built for a single architecture, so it has a single implementation of setup_arch. The generic start_kernel calls that, but it doesn’t pass an initialised pointer to the command line, it passes a pointer to a pointer to the command line, and it’s part of setup_arch’s job to initialise that pointer.
For example, x86 has a global command_line variable, and its setup_arch stores its address in the pointer provided by start_kernel.
So the kernel effectively gets the architecture of the computer when it’s built.
| How does the Linux kernel know the computer architecture? |
1,304,322,391,000 |
I need to set the arch option in debootstrap. So I did some research and read the manual.
After reading the manual I see that the section on the options simply says
--arch=ARCH
Implying that I should know the correct syntax for the architecture I need.
I don't. I need 64 bit architecture.
I know that "i386" can be used for 32bit architecture.
What should I set the --arch option to if I want 64 bit architecture?
Or more generally what would the range of options be?
I could guess (but don't know and can't determine) that the range of arch options potentially depends on the OS being bootstrapped. In my case it's a version of Ubuntu that I know should work in 64-bit. So the question becomes: how would I determine the 64-bit architecture option syntax?
I could further guess (but again don't know and can't determine) that the option syntaxes are actually supplied by the booted OS and if I knew where to look I could figure it out. In which case, where would I look?
|
The possible values are the codenames of the architectures supported by the target operating system. For Ubuntu, check the architectures for which the C library is built: for 64-bit x86, the appropriate value is amd64.
On systems with dpkg,
dpkg --print-architecture
will show the current architecture (which is the default architecture for debootstrap).
debootstrap is also capable of installing a system for any supported architecture, not only the host system’s architecture; see its --foreign option. If necessary it can use Qemu to emulate the target architecture.
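Putting that together, here is a sketch of the resulting invocation (the suite name and target directory are placeholders of my own; the command is only printed, not executed):

```shell
# Build a debootstrap command line: use the host's default
# architecture from dpkg when no argument is given, falling
# back to amd64 if dpkg isn't available.
build_debootstrap_cmd() {
    arch=${1:-$(dpkg --print-architecture 2>/dev/null || echo amd64)}
    echo "debootstrap --arch=$arch stable /mnt/target"
}
build_debootstrap_cmd amd64
```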
| What are the possible options for the --arch option in debootstrap? |
1,304,322,391,000 |
We have a physical Linux machine with 16 CPUs:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
We want to disable 14 CPUs on that machine, so it's effectively like having a Linux machine with only 2 CPUs.
In order to achieve this, I did the following:
echo 0 > /sys/devices/system/cpu/cpu15/online
echo 0 > /sys/devices/system/cpu/cpu14/online
echo 0 > /sys/devices/system/cpu/cpu13/online
echo 0 > /sys/devices/system/cpu/cpu12/online
echo 0 > /sys/devices/system/cpu/cpu1/online
echo 0 > /sys/devices/system/cpu/cpu11/online
echo 0 > /sys/devices/system/cpu/cpu10/online
echo 0 > /sys/devices/system/cpu/cpu9/online
echo 0 > /sys/devices/system/cpu/cpu8/online
echo 0 > /sys/devices/system/cpu/cpu7/online
echo 0 > /sys/devices/system/cpu/cpu6/online
echo 0 > /sys/devices/system/cpu/cpu5/online
echo 0 > /sys/devices/system/cpu/cpu4/online
I then also ran mpstat, and we get:
08:26:13 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
08:26:13 AM all 0.34 0.00 0.09 0.04 0.00 0.00 0.00 0.00 0.00 99.53
08:26:13 AM 0 0.42 0.00 0.12 0.01 0.00 0.00 0.00 0.00 0.00 99.45
08:26:13 AM 1 0.37 0.00 0.10 0.01 0.00 0.00 0.00 0.00 0.00 99.52
08:26:13 AM 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 11 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 12 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 13 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 14 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:26:13 AM 15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
We can see that only 2 CPUs are online.
But I'm not sure whether this approach really works, and I need advice.
For example, how can I be sure that no PID will use the other 14 CPUs that are disabled?
Let me know whether my procedure disables the 14 CPUs so that processes will use only the 2 remaining CPUs.
|
This might or might not work, depending on the application.
If the application simply uses APIs to poll the number of available cores, it might not work because the Linux kernel might return all the cores.
However disabling CPU cores in BIOS must work - it depends on your BIOS implementation, so please consult with your motherboard documentation.
If I were you, I'd approach this issue differently: I'd run the app in a VM and allocate the required number of cores to it. This way your host OS will still be able to use the remaining cores.
Also, you don't need to run echo 14 times.
Here's a simpler version for bash:
echo 0 | sudo tee /sys/devices/system/cpu/cpu{2..15}/online
Lastly make sure you leave two physical cores instead of a single core with HT. To learn your CPU topology run:
lscpu -p
Normally the Linux kernel first sees physical cores, then HT/SMT cores but I'm not sure it's always the case.
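To confirm which CPUs actually remain online, read /sys/devices/system/cpu/online; it uses the kernel's CPU list syntax (e.g. 0-1). A small helper of my own to expand that syntax, shown as a sketch:

```shell
# Expand the kernel's CPU list syntax ("0-1,4,6-7") into a
# space-separated list of CPU numbers ("0 1 4 6 7").
expand_cpulist() {
    out=
    # split the list on commas
    for part in $(printf %s "$1" | tr ',' ' '); do
        case $part in
            *-*)
                # a "lo-hi" range: emit every number in it
                first=${part%-*} last=${part#*-}
                i=$first
                while [ "$i" -le "$last" ]; do
                    out="$out $i"
                    i=$((i + 1))
                done
                ;;
            *) out="$out $part" ;;
        esac
    done
    # trim the leading space
    printf '%s\n' "${out# }"
}
```

For example, expand_cpulist "$(cat /sys/devices/system/cpu/online)" lists each online CPU after your echo commands have run.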
| rhel + how to disable CPU's on my machine |
1,304,322,391,000 |
I'm just learning a bit about lower-level languages and I've noticed that with gcc you can specify -march and -mtune parameters to optimise the software for particular CPU families.
But I've also found people saying that building a program from source won't make it noticeably faster than downloading the binary. Surely being able to have the software optimised for the CPU in your system would provide a notable speed boost, especially in software like ffmpeg which uses fairly microarchitecture-dependent features such as AVX?
What I'm wondering is, are the binaries on package managers somehow optimised for multiple microarchitectures? Does the package manager download binaries specific to my system's microarchitecture?
|
Distribution packages are built with reference to a pre-determined baseline (see Debian’s architecture baselines for example). Thus, in Debian, amd64 packages target generic x86-64 CPUs, with SSE2 but not SSE3 or later; i386 packages target generic i686 CPUs, without MMX or SSE. In general, the compiler defaults are used, so tuning might evolve as the compiler itself evolves.
However packages where CPU-specific optimisations provide significant benefit can be built to take advantage of newer CPUs. This is done by providing multiple implementations rather than relying on compiler optimisations, and choosing between them at runtime: the packaged software detects the running CPU and adjusts the code paths it uses to take advantage of it (see ffmpeg’s libswscale/x86/swscale.c for example). On some architectures, ld.so itself helps with this: it can automatically load an optimised library if it’s available, e.g. on an i386-architecture system running on an SSE-capable CPU.
Most if not all package managers are oblivious to all this; they download a given architecture’s package and install it, without regard for the CPU running the system.
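To see how far your own CPU exceeds the distribution baseline, you can test for individual feature flags from /proc/cpuinfo. A small sketch (the function name is mine; flag names follow the kernel's spelling, e.g. sse4_2, avx, avx2):

```shell
# Answer "yes"/"no" depending on whether a feature name appears
# as a whole word in a space-separated CPU flags string.
has_flag() {
    case " $2 " in
        *" $1 "*) echo yes ;;
        *)        echo no ;;
    esac
}
```

On a live system you would feed it the flags line, e.g. has_flag avx2 "$(grep -m1 '^flags' /proc/cpuinfo)".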
| What microarchitecture are packages on apt/yum typically built/tuned for? |
1,304,322,391,000 |
The context is the following:
Additionally, the following rule is required in systems supporting the 32-bit syscall table (such as i686 and x86_64).
I'm trying to figure out what this means, and how I can check whether my system needs this rule. They reference the chown and chown32 commands/(syscalls, maybe), and continue to discuss system architecture around it. I only care about Linux but I don't only care about x86.
In the output of lscpu, you have the field for CPU op-mode which looks like:
CPU op-mode(s): 32-bit, 64-bit
for a x86_64 processor with dual architecture support. I presume that would show 32 or 64 only on systems not capable of interpreting the other's instructions.
My dilemma is figuring this out programmatically (I'll eventually write it as Python) on a legacy system without lscpu on it. I've examined this question where they talk about finding 64-bit compatibility, but I'm struggling to make use of these in the opposite use case.
So to summarize why this is a problem so far:
lscpu is not the machine
I've run sudo find / -iregex .*lscpu.* to make sure
/proc/cpuinfo explains 64-bit compatibility through flags ending in _lm (not 32-bit compatibility, as far as I'm aware)
uname is insufficient: it displays the primary architecture, and while it's safe to assume x86_64 undoubtedly supports 32-bit too, mapping known architectures to compatibility doesn't seem the most reliant or efficient way of solving this particular problem
hwinfo is not on the machine
getconf LONG_BIT checks 64-bit compatibility
lshw is not on the machine
It's possible I've overlooked something and equally possible I don't understand enough about the subject as a programmer. Could someone please help me understand how to programmatically—meaning some method of obtaining an exact or parsable output—check if my system has 32-bit compatibility?
|
I take it you’re following the RHEL 5 Security Technical Implementation Guide.
“Does my Linux system support the 32-bit syscall table?” is a very interesting question and, as Gilles mentions, not one which has conclusively been addressed on the site. It’s also a tough question to answer.
I’ll start by reducing it to the STIG context, i.e. “Does my Linux system support the i386 syscall table?” (We’ll revisit the more general issue later.) You can’t obtain a definitive list of supported syscalls from a running kernel, but in this case we don’t need to: all we need to do is look for the syscall entry points. The i386 entry points are conveniently named: the externally-visible functions used by both the native 32-bit calls and the 64- to 32-bit emulation layer are do_fast_syscall_32 and do_int80_syscall_32. The best way to check for those is to look for them in /proc/kallsyms (I’m hoping there isn’t another STIG rule which forbids that...). If they’re present, then the current kernel supports i386 system calls, and you need the lchown32 audit rule.
Reading through the other answers on this sort of topic here, you’ll gather that a typical way of testing for system call support on a running system is to try to call the system call. When auditing a system that might not be appropriate since it should trigger an audit rule. It also can result in false negatives when auditing since it typically relies both on the kernel supporting the relevant system call, and the system providing the necessary framework.
Using the results of lscpu and other similar tools is also misleading since they report the installed CPU’s capabilities, not the system’s. For example, lscpu hard-codes equivalencies: lm, zarch, or sun4[uv] in the CPU flags tell it that 32- and 64-bit support is available, which it is from the CPU’s perspective, but lscpu doesn’t determine whether the rest of the system supports it too (nor should it).
Revisiting the more general issue, “Does my Linux system support the 32-bit syscall table?”, determining the answer will always depend on the architecture. If we try to look at system calls to determine the answer, we need to take into account the system call history on the architecture; for example, chown32 and siblings aren’t necessarily supported on 32-bit architectures. Likewise, looking for entry points is architecture-dependent.
Thus I don’t think there is a general answer to your question; answers have to take into account at least the target architecture.
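For the x86-64 case specifically, the entry-point check can be scripted against a kallsyms-style listing; a sketch (pass /proc/kallsyms on the live system):

```shell
# Print "yes" if a kallsyms-format file lists either i386 syscall
# entry point, "no" otherwise. These are the symbol names visible
# in /proc/kallsyms on x86-64 kernels with 32-bit syscall support.
has_ia32_syscalls() {
    if grep -Eq ' (do_fast_syscall_32|do_int80_syscall_32)$' "$1"; then
        echo yes
    else
        echo no
    fi
}
```

On a running system, has_ia32_syscalls /proc/kallsyms gives the answer for the current kernel.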
| Does my Linux system support the 32-bit syscall table? |
1,304,322,391,000 |
I'm working on setting up a minimal Debian install on a USB stick, and am just trying to wrap my head around debootstrap and differences in architectures. I want to create a system to run on AMD64 (AMD Sempron 145) from i686 (Intel Atom N450). As far as I understand, the Atom is a 64-bit processor, so can I just do this:
debootstrap --arch=amd64 wheezy /mnt/foobar
Or do I have to follow one of the more complicated cross-debootstrap procedures?
extra info:
$ lscpu
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 2
Thread(s) per core: 2
Core(s) per socket: 1
CPU socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 28
Stepping: 10
CPU MHz: 1666.444
L1d cache: 24K
L1i cache: 32K
L2 cache: 512K
|
debootstrap needs to be able to run executables in the target system. If that'll work, then it'll be fine. If not, it'll blow up obviously.
I'm pretty sure it should work as long as you're running a 64-bit kernel. You can run a 64-bit kernel with a 32-bit userland (but not vice versa). So, worst case, you may need to install a 64-bit kernel on your current Atom system.
Also, note that debootstrap may not make everything 100% ready to boot. E.g., I'm not sure fstab will be set up, or a bootloader installed, etc. If possible, it'll likely be easier to run the Debian Installer on your Sempron box instead.
Or, if you're trying to build a live "CD", see http://live.debian.net/
| debootstrap from Intel Atom (i686) to AMD Sempron (AMD 64) |
1,304,322,391,000 |
Trying to find the most portable way to determine the CPU architecture of a system, be it 32-bit x86, 64-bit, or something else (e.g. ARM). Does the arch command exist on all systems? Otherwise, how do I test this from the shell?
|
arch is a GNU command. It's just a synonym for uname -m. uname -m is portable in that its presence is guaranteed by POSIX and it exists on historical Unix systems except for extremely early ones.
What isn't so portable is the meaning of the output. That does vary between Unix variants.
The output does not tell you whether the system is 32-bit or 64-bit. No command can tell you whether the system is 32-bit or 64-bit, because this is not a well-defined option. See Linux command to return number of bits (32 or 64)? for some ways to report the bitness of a system, for several notions of 32/64-bit.
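If you do want to act on uname -m output anyway, be prepared to handle several spellings per CPU family; here's a sketch covering a few common values (the function and its mapping are my own, extend as needed):

```shell
# Map a `uname -m` machine string to a rough CPU family label.
# Only a handful of common values are handled.
describe_machine() {
    case $1 in
        x86_64 | amd64)  echo "64-bit x86" ;;
        i[3-6]86)        echo "32-bit x86" ;;
        aarch64 | arm64) echo "64-bit ARM" ;;
        arm*)            echo "32-bit ARM" ;;
        *)               echo "unrecognised ($1)" ;;
    esac
}
describe_machine "$(uname -m)"
```

Remember this only describes the kernel's reported machine type, not whether the userland (or CPU) is 32- or 64-bit.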
| does arch exist on all linux/unix systems? |
1,304,322,391,000 |
Recently I downloaded the latest KDE neon amd64 ISO. I don't know whether it's corrupt. But can I install amd64 software on an i686 architecture?
|
You CAN run 64-bit (x86_64 in Red Hat and relatives, amd64 in Debian relatives) or 32-bit (i386–i686) software (code, kernel, OS) on 64-bit-enabled (AMD64, EM64T) x86-compatible hardware (CPU).
Check support with the hardware vendor. Generally speaking, AMD Athlon 64 and newer, and Intel Xeon Nocona/Pentium 4 Prescott and later, are 64-bit CPUs. More in https://en.wikipedia.org/wiki/X86
Note: Intel IA-64 Itanium CPUs are (usually) not 32-bit enabled.
You CANNOT run 64-bit software on 32-bit hardware unless you use full CPU emulation (like qemu in emulation mode, not KVM).
You may have a 32-bit OS running on 64-bit hardware while running lscpu. Your reported architecture (software) is then i686, but you can still boot a 64-bit ISO if it's not corrupted. You can check the MD5/SHA checksum to test for corruption.
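To check a download for corruption, compare its digest against the published value. A sketch using coreutils' sha256sum (the ISO and checksum file names you'd use are whatever the distributor publishes):

```shell
# Compare a file's SHA-256 digest with an expected hex value;
# prints OK on a match, CORRUPT otherwise.
check_sha256() {
    actual=$(sha256sum "$1" | cut -d' ' -f1)
    if [ "$actual" = "$2" ]; then
        echo OK
    else
        echo CORRUPT
    fi
}
```

For instance: check_sha256 neon.iso "$(cut -d' ' -f1 neon.iso.sha256sum)" (file names here are placeholders).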
| Can I install kde neon amd64 on i686 architecture? |
1,304,322,391,000 |
I ran the top command to check CPU performance and memory usage on the new RPi3 while running a browser.
Since we have a 4× ARM Cortex-A53 at 1.2GHz, how should I read the result?
$ top
Mem: 327132K used, 620864K free, 29124K shrd, 5800K buff, 164492K cached
CPU: 80% usr 8% sys 0% nic 2% idle 0% io 0% irq 9% sirq
...
80% un-niced processes
8% system processes
2% idle?
My understanding is that when it shows 80%, that means 80% out of 400% full capacity, since we have 4 cores, right?
Does that mean that the top command doesn't calculate the idle value correctly?
How do I check the % usage/idle for each core?
My Linux (Image built with Yocto for RPi3)
root@raspberrypi3:~# uname -a
Linux raspberrypi3 4.1.18 #1 SMP Thu Mar 17 10:26:07 CET 2016 armv7l GNU/Linux
root@raspberrypi3:~# lsb_release -a
LSB Version: core-4.1-noarch:core-4.1-arm
Distributor ID: poky
Description: Poky (Yocto Project Reference Distro) 1.8.1
Release: 1.8.1
Codename: fido
top version
root@raspberrypi3:~# top --version
top: unrecognized option '--version'
BusyBox v1.23.1 (2015-10-19 16:33:36 CEST) multi-call binary.
Usage: top [-b] [-nCOUNT] [-dSECONDS]
|
Depending on the version of top, the CPU usage summary might use 100% to mean one core's worth or to mean the total available CPU. Given your output, it appears that you're using the BusyBox version of top; it uses 100% to mean the total available CPU time, so your CPU is fully busy, spending about 80% of its time on computations and about 19% on I/O. The entry for each process also gives stats relative to the whole available processing power, so on a quadcore machine each thread tops out at 25%.
The top version from procps (the version on non-embedded Linux, also the default version on e.g. Raspbian) uses different conventions: for the global CPU consumption, 100% is the total across CPU; but for each process, 100% means one CPU's worth.
htop has a nicer interface and breaks down CPU usage per CPU. There you'd see each CPU's utilization. On individual processes, htop counts one CPU's worth as 100%, like the procps version.
Keep in mind that calculations are not exact, they're based on sampling. (Taking precise CPU utilization measurements would itself take up significant CPU time, especially in cases of high contention.) There isn't a meaningful difference between 2% idle and fully busy.
| Understanding the top command output on an ARM multicore computer |
1,288,603,099,000 |
sha1sum outputs a hex-encoded form of the actual SHA. I would like to see a base64-encoded variant, possibly via some command that outputs the binary version that I can pipe, like so: echo -n "message" | <some command> | base64, or if it outputs it directly, that's fine too.
|
If you have the command line utility from OpenSSL, it can produce a digest in binary form, and it can even translate to base64 (in a separate invocation).
printf %s foo | openssl dgst -binary -sha1 | openssl base64 -A
-sha256, -sha512, etc are also supported.
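For example, the SHA-256 variant (the base64 output of a 32-byte SHA-256 digest is always 44 characters, including one padding character):

```shell
# Base64-encoded SHA-256 digest of a message (needs the openssl CLI).
printf %s message | openssl dgst -binary -sha256 | openssl base64 -A
```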
| How can I get a base64 encoded shaX on the cli? |
1,288,603,099,000 |
The cryptographic signature of an RPM can be verified with the rpm -K command. This returns a string containing gpg (or pgp) and ending in OK if the signature is in RPM's database and is valid.
If the package is not signed but the checksums are valid, you'll still get OK, but no gpg.
If the package is signed but the key is missing from the RPM database, you get (GPG) (capital letters) and NOT OKAY, followed by (MISSING KEYS: GPG#deadbeef).
That's handy if I want to figure out what key I should find to install to make my package installation work.
But what if I want to verify which of several keys in my RPM keyring was used to sign a given package?
|
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE} %{SIGPGP:pgpsig} %{SIGGPG:pgpsig}\n'
| How do I tell which GPG key an RPM package was signed with? |
1,288,603,099,000 |
For a class on cryptography, I am trying to drain the entropy pool in Linux (e.g. make /proc/sys/kernel/random/entropy_avail go to 0 and block a command reading from /dev/random) but I can't make it happen. I'm supposed to get reads from /dev/random to block. If I execute these two commands:
watch -n 0.5 cat /proc/sys/kernel/random/entropy_avail
to watch entropy and then:
od -d /dev/random
to dump the random pool, the value from the watch command hovers between 3700 and 3900, and gains and loses only a little while I run this command. I let both commands run for about three minutes with no discernible substantial change in the size of entropy_avail. I didn't do much on the computer during that time. From googling around I find that perhaps a hardware random number generator could be so good that the entropy won't drop but if I do:
cat /sys/devices/virtual/misc/hw_random/rng_available
I see nothing, I just get a blank line. So I have a few questions:
What's replenishing my entropy so well, and how can I find the specific source of randomness?
Is there any way to temporarily disable sources of randomness so I can force this blocking to happen?
|
There is a surprising amount of development going on around the Linux random device. The slow, blocking /dev/random is gone and replaced by a fast /dev/random that never runs out of data.
You'll have to travel back in time, to before Linux 4.8 (which introduced a much faster CRNG algorithm) or possibly Linux 5.6 (which introduced jitter entropy generation).
There is no way to get the original behavior back in current kernels.
If you are seeing this issue in older versions of Linux, hwrng aside, you might be using haveged, rng-tools' rngd, or a similar userspace entropy provider.
Some distros install these by default to avoid hangs while waiting for a few random bits; in that case you can uninstall or disable them, or try it from within an initrd/busybox shell where no other processes are running.
If the issue still persists, you might just have a very noisy piece of hardware from which kernel keeps collecting entropy naturally.
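You can confirm what your own kernel reports directly; on post-5.6 kernels entropy_avail typically just sits at the CRNG pool size and never drains:

```shell
# The kernel's entropy estimate and pool size (values are
# kernel-version dependent; on modern kernels entropy_avail
# is effectively constant).
cat /proc/sys/kernel/random/entropy_avail
cat /proc/sys/kernel/random/poolsize
```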
| How can I force /dev/random to block? |
1,288,603,099,000 |
I’m having fun with OpenSSH, and I know the /etc/ssh directory is for the ssh daemon and the ~/.ssh directory is for a particular user.
Both directories contain private and public keys:
But what is the difference between those keys? I’m confused because the ones I use as a user is in my home directory, and what are the roles of the keys found in /etc/ssh?
|
/etc/ssh provides configuration for the system: default configuration for users (/etc/ssh/ssh_config), and configuration for the daemon (/etc/ssh/sshd_config). The various host files in /etc/ssh are used by the daemon: they contain the host keys, which are used to identify the server — in the same way that users are identified by key pairs (stored in their home directory), servers are also identified by key pairs. Multiple key pairs are used because servers typically offer multiple types of keys: RSA, ECDSA, and Ed25519 in your case. (Users can also have multiple keys.)
The various key files are used as follows:
your private key, if any, is used to identify you to any server you’re connecting to (it must then match the public key stored in the server’s authorized keys for the account you’re trying to connect to);
the server’s private key is used by the client to identify the server; such identities are stored in ~/.ssh/known_hosts, and if a server’s key changes, SSH will complain about it and disable certain features to mitigate man-in-the-middle attacks;
your public key file stores the string you need to copy to remote servers (in ~/.ssh/authorized_keys); it isn’t used directly;
the server’s public key files store strings you can copy to your known hosts list to pre-populate it; it also isn’t used directly.
The last part isn’t used all that often; the default SSH model is known as “TOFU” (trust on first use): a connection is trusted by default the first time it’s used, and SSH only cares about unexpected changes. In some cases though it’s useful to be able to trust the first connection too: a server’s operator can communicate the server’s public keys, and users can add these to their known hosts before the first connection.
See the ssh_config and sshd_config manpages for details (man ssh_config and man sshd_config on your system). The format used for known hosts is described in the sshd manpage.
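To inspect the keys concretely, ssh-keygen can print fingerprints of both host keys and recorded known-hosts entries; for example (paths as in the question, assuming the OpenSSH client tools are installed):

```shell
# Fingerprint of the server's Ed25519 host key (run on the server):
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
# Fingerprints of every host already recorded client-side:
ssh-keygen -lf ~/.ssh/known_hosts
```

Comparing these two outputs is how you verify a server's identity out of band instead of relying purely on trust-on-first-use.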
| What is the difference between /etc/ssh/ and ~/.ssh? |
1,288,603,099,000 |
Is there any program or script available for decrypt Linux shadow file ?
|
Passwords on a linux system are not encrypted, they are hashed which is a huge difference.
It is not possible to reverse a hash function by definition.
For further information see the Hash Wikipedia entry.
Which hash function is used, depends on your system configuration. MD5 and blowfish are common examples for used hash functions.
So the "real" password of a user is never stored on the system.
If you login, the string you enter as the password will be hashed and checked against your /etc/shadow file. If it matches, you obviously entered the correct password.
Anyway, there are still some attack vectors against the password hashes. You could keep a dictionary of popular passwords and try them automatically. There are a lot of dictionaries available on the internet. Another approach is to try out all possible combinations of characters, which consumes a huge amount of time. This is known as a brute-force attack.
Rainbow tables are another nice attack vector against hashes. The idea behind this concept is to simply precalculate all possible hashes and then just look up a hash in the tables to find the corresponding password. There are several distributed computing projects to create such tables; the size depends on the character set used and is typically several TB.
To minimize the risk of such lookup tables, it's common practice (and the default behaviour in Unix/Linux) to add a so-called "salt" to the password hash: a random salt value is combined with the password before hashing. You need to save the resulting hash and the salt to be able to check whether an entered value is the correct password. The huge advantage of this method is that an attacker would have to create new lookup tables for each unique salt.
A popular tool to execute dictionary or brute force attacks against user passwords of different operating systems is John The Ripper (or JTR).
See the project homepage for more details:
John the Ripper is a fast password
cracker, currently available for many
flavors of Unix, Windows, DOS, BeOS,
and OpenVMS. Its primary purpose is to
detect weak Unix passwords.
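You can watch salting in action with openssl passwd (SHA-512 crypt shown; the salts here are arbitrary examples): the same password with the same salt always yields the same hash, while a different salt yields a completely different one.

```shell
# Same password, same salt: reproducible hash.
openssl passwd -6 -salt aaaaaaaa password
# Same password, different salt: entirely different hash,
# defeating precomputed lookup tables.
openssl passwd -6 -salt bbbbbbbb password
```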
| Program for decrypt linux shadow file |
1,288,603,099,000 |
In order to verify a password hash we can use openssl passwd as shown below and explained here
openssl passwd $HASHING-ALGORITHM -salt j9T$F31F/jItUvvjOv6IBFNea/ $CLEAR-TEXT-PASSWORD
However, this will work only for the following algorithm: md5, crypt, apr1, aixmd5, SHA-256, SHA-512
How can I calculate the password hash from bash, Python, or Node.js for a $CLEAR-TEXT-PASSWORD, with a salt, using yescrypt?
|
perl's crypt() or python3's crypt.crypt() should just be an interface to your system's crypt() / crypt_r(), so you should be able to do:
$ export PASS=password SALT='$y$j9T$F31F/jItUvvjOv6IBFNea/$'
$ perl -le 'print crypt($ENV{PASS}, $ENV{SALT})'
$y$j9T$F31F/jItUvvjOv6IBFNea/$pCTLzX1nL7rq52IXxWmYiJwii4RJAGDJwZl/LHgM/UD
$ python3 -c 'import crypt, os; print(crypt.crypt(os.getenv("PASS"), os.getenv("SALT")))'
$y$j9T$F31F/jItUvvjOv6IBFNea/$pCTLzX1nL7rq52IXxWmYiJwii4RJAGDJwZl/LHgM/UD
(provided your system's crypt() supports the yescrypt algorithm with the $y$... salts)
| Verifying a hashed salted password that uses yescrypt algorithm |
1,288,603,099,000 |
How can I make a file digest under Linux with the RIPEMD-160 hash function, from the command line?
|
You can use openssl for that (and for other hash algorithms):
$ openssl list-message-digest-commands
md4
md5
mdc2
rmd160
sha
sha1
$ openssl rmd160 /usr/bin/openssl
RIPEMD160(/usr/bin/openssl)= 788e595dcadb4b75e20c1dbf54a18a23cf233787
| RIPEMD-160 file digest |
1,288,603,099,000 |
I am currently in the process of setting up an encrypted home server with ZFS and geli.
However, I am not sure what the correct partition type for geli-encrypted filesystems is.
Do I just take 'freebsd-zfs' like I would do for a noncrypted zfs partition?
Do I go with the more generic 'freebsd'?
All I want to know in the end is what value to pass as the '-t' parameter when calling 'gpart add'.
|
In case of doubt, use 0xDA (“raw / non-filesystem data”).
That will always work, and it will be ignored by virtually all OSes, so geli can just use the corresponding block device.
| What is the correct partition type for a geli-crypted partition on FreeBSD? |
1,288,603,099,000 |
Is it possible to use kernel cryptographic functions in the userspace? Let's say, I don't have md5sum binary installed on my system, but my kernel has md5sum support. Can I use the kernel function from userspace? How would I do it?
Another scenario would be, if I don't trust the md5sum binary on my system (my system could have been compromised), but I trust my kernel (I am using cryptographically signed kernel modules).
|
According to this article, titled "A netlink-based user-space crypto API", it would appear that what you're proposing is possible. I'm not sure how to answer your question any further than this article, though.
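For what it's worth, current kernels expose their crypto implementations to user space through AF_ALG sockets (merged in Linux 2.6.38), the successor to the netlink proposal discussed in that article. A minimal sketch using Python's standard library (the AF_ALG constants are Linux-only, and the socket call can fail if the kernel was built without this support):

```python
import hashlib
import socket

def kernel_hash(algo: str, data: bytes) -> bytes:
    # Bind a config socket to the kernel algorithm, accept an operation
    # socket, feed it data, and read back the digest -- no user-space
    # crypto code involved.
    with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as cfg:
        cfg.bind(("hash", algo))   # e.g. "md5", "sha256"
        op, _ = cfg.accept()       # operation socket
        with op:
            op.sendall(data)
            return op.recv(64)     # digest (up to 64 bytes)

if hasattr(socket, "AF_ALG"):      # only defined on Linux builds
    try:
        digest = kernel_hash("sha256", b"hello, kernel")
        print(digest.hex())
    except OSError:
        pass                       # kernel lacks AF_ALG support
```

This matches your second scenario: the hashing runs entirely in the kernel, so a tampered md5sum binary is taken out of the picture (though a compromised userland could still tamper with this script).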
| Using kernel cryptographic functions |
1,288,603,099,000 |
I need to send a private key file to someone (a trusted sysadmin) securely. I suggested a couple options, but he replied as follows:
Hi, I don't have neither LastPass nor GnuPGP but I'm using ssl
certificates - this message is signed with such so you will be able to
send a message to me and encrypt it with my public key.
I used openssl to obtain his certificate:
openssl pkcs7 -in smime.p7s -inform DER -print_certs
The certificate is issued by:
issuer=/O=Root CA/OU=http://www.cacert.org/CN=CA Cert Signing Authority/[email protected]
(Firefox doesn't have a root certificate from cacert.org.)
Now, how do I encrypt the key file I wish to send to him? I prefer to use a command line tool available in Ubuntu.
@lgeorget:
$ openssl pkcs7 -inform DER -outform PEM -in smime.p7s -out smime.pem
$ openssl smime -encrypt -text -in /home/myuser/.ssh/mykeyfile smime.pem
unable to load certificate
139709295335072:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:696:Expecting: TRUSTED CERTIFICATE
and
$ openssl pkcs7 -in smime.p7s -inform DER -print_certs
subject=/CN=Wojciech Kapcia/[email protected]/[email protected]
issuer=/O=Root CA/OU=http://www.cacert.org/CN=CA Cert Signing Authority/[email protected]
-----BEGIN CERTIFICATE-----
MIIFzjCCA7agAwIBAgIDDR9oMA0GCSqGSIb3DQEBBQUAMHkxEDAOBgNVBAoTB1Jv
b3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEiMCAGA1UEAxMZ
dEBjYWNlcnQub3JnMB4XDTEzMDQxODA3NDEzNFoXDTE1MDQxODA3NDEzNFowcDEY
MBYGA1UEAxMPV29qY2llY2ggS2FwY2lhMSkwJwYJKoZIhvcNAQkBFhp3b2pjaWVj
[snip]
N1lNLq5jrGhqMzA2ge57cW2eDgCL941kMmIPDUyx+pKAYj1I7IibN3wcP1orOys3
amWMrFRa30LBu6jPYy2TeeoQetKnabefMNE3Jv81gn41mPOs3ToPXEUmYU18VZ75
Efd/qu4SV/3SMdySSNmPAVQdXYAxBEXoN5b5FpUW7KeZnjoX4fkEUPeBnNwcptTC
d1w=
-----END CERTIFICATE-----
|
You can do
openssl smime -encrypt -text -in <file> smime.p7s
where <file> is the file you want to encrypt. If the file smime.p7s is in DER format instead of PEM, you will have to convert it with :
openssl pkcs7 -inform DER -outform PEM -in smime.p7s -out smime.pem
You obtain a file you can send to your sysadmin. If you are brave enough, you can remove -text and play with the options -to, -subject, etc. to get a valid email file you can send directly to an SMTP server.
If the root certificate of the certificate you use to encrypt is not recognized by your operating system but YOU trust it, you can add it to the certificate base.
cp smime.pem /usr/local/share/ca-certificates/certificate.crt
sudo update-ca-certificates
The certificate must have the .crt extension. Details here.
| How to encyrpt a message using someone's SSL smime.p7s file |
1,288,603,099,000 |
In the Linux kernel configuration, I see these options:
config CRYPTO_PCRYPT
tristate "Parallel crypto engine"
depends on SMP
select PADATA
select CRYPTO_MANAGER
select CRYPTO_AEAD
help
This converts an arbitrary crypto algorithm into a parallel
algorithm that executes in kernel threads.
config CRYPTO_CRYPTD
tristate "Software async crypto daemon"
select CRYPTO_BLKCIPHER
select CRYPTO_HASH
select CRYPTO_MANAGER
select CRYPTO_WORKQUEUE
help
This is a generic software asynchronous crypto daemon that
converts an arbitrary synchronous software crypto algorithm
into an asynchronous algorithm that executes in a kernel thread.
What is the difference between arbitrary algorithms, asynchronous algorithms, and parallel algorithms in cryptography?
|
In synchronous execution, you wait for the task to finish before moving on to another task.
In asynchronous execution, you can move on to another task before the previous one finishes.
These terms are not specifically related to cryptography. In general, textbook descriptions of crypto algorithms are neither synchronous nor asynchronous, but the implementations of the algorithm can be either.
Consider for instance a high-level description of AES:
AES is a block cipher, so each 128 bit input block goes through the following transformations:
KeyExpansion
InitialRound
AddRoundKey
Rounds (10, 12 or 14 rounds of repetition depending on key size)
SubBytes
ShiftRow
MixColumns
AddRoundKey
Final Round
SubBytes
ShiftRows
AddRoundKey
Depending on the cipher mode used, the encryption of the blocks can be either dependent or independent. For instance, in Cipher-block chaining (CBC) mode encryption, the ciphertext from the previous block is used to transform the plaintext of the next block before the actual AES transformations. In this case, the implementation of the algorithm must be synchronous, as the output of the previous step is needed as input for the next.
On the other hand, in Electronic codebook (ECB) mode, each block is encrypted separately. This means that the implementation of the algorithm may be either synchronous as before, or asynchronous, in which case the encryption of the next block can begin even while the AES rounds for the previous blocks are ongoing.
In our example, the execution could proceed in separate threads of execution for each block, or pipelined, i.e. the algorithm is split up into multiple independent parts. For instance, the initial round for the next block could be performed while the round repetitions for the previous block are ongoing. Pipelining is a common technique in hardware crypto implementations.
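To make the block (in)dependence concrete, here is a toy sketch in Python. The "block cipher" below is a hash-based stand-in, not real AES and not secure; it exists only to show why CBC encryption is inherently serial while ECB blocks could be handed to separate threads (which is the kind of parallelism pcrypt exploits):

```python
import hashlib

BLOCK = 16

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES: a fixed-size keyed transformation (NOT secure).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_ecb(key: bytes, blocks):
    # Every block is independent: this loop could run in parallel.
    return [toy_block_cipher(key, b) for b in blocks]

def encrypt_cbc(key: bytes, iv: bytes, blocks):
    # Each block needs the previous ciphertext first: inherently serial.
    out, prev = [], iv
    for b in blocks:
        prev = toy_block_cipher(key, xor(b, prev))
        out.append(prev)
    return out
```

Note how, in ECB, two identical plaintext blocks produce identical ciphertext blocks, which is also why ECB leaks patterns; in CBC the chaining hides the repetition at the cost of serial execution.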
The cryptd module is a template that takes a synchronous software crypto algorithm and converts it to an asynchronous one by executing it in a kernel thread.
The concurrent execution of the asynchronous algorithm may either be interleaved, e.g. the algorithm executes on a single CPU core, and switches back and forth between the operations on the two blocks, or parallel, e.g. the execution of the algorithm proceeds simultaneously on multiple cores, each core processing a block of their own.
The pcrypt parallel crypto template takes a crypto algorithm and converts it to process the crypto transformations in parallel.
I think that arbitrary in these cases refers only to the fact that the architecture of the modules is general enough to be applied to any cryptographic algorithm. I believe that for the moment only AEAD algorithms are supported by pcrypt.
| Asynchronous and parallel algorithms in the Linux kernel |
1,288,603,099,000 |
I am writing a simple bash script to download stream from Twitter:
curl -H "Authorization: ${TOKEN}" "$URL"
and I am looking for a way to generate the $TOKEN. I have all the input necessary (CONSUMER_KEY, ...), but where can I get the program oauth_sign that will generate the token from the input data?
TOKEN=$(oauth_sign $CONSUMER_KEY $CONSUMER_SECRET $ACCESS_TOKEN $ACCESS_SECRET GET $URL)
|
I just downloaded the link @goldilocks provided, http://acme.com/software/oauth_sign/, and confirmed that it compiles. Looks very straightforward.
compile
$ make
gcc -c -Wall -O liboauthsign.c
liboauthsign.c: In function ‘oauth_sign’:
liboauthsign.c:123:5: warning: implicit declaration of function ‘getpid’
liboauthsign.c:305:5: warning: pointer targets in passing argument 4 of ‘HMAC’ differ in signedness
/usr/include/openssl/hmac.h:99:16: note: expected ‘const unsigned char *’ but argument is of type ‘char *’
rm -f liboauthsign.a
ar rc liboauthsign.a liboauthsign.o
ranlib liboauthsign.a
gcc -Wall -O oauth_sign.c -L. -loauthsign -lcrypto -o oauth_sign
usage
$ ./oauth_sign --help
usage: oauth_sign [-q] consumer_key consumer_key_secret token token_secret method url [name=value ...]
excerpt from README
To use it, you supply the four cryptographic cookies and the method
and URL of the request. If it's a POST request with extra parameters,
you have to give those too. Oauth_sign puts all this together and
makes the signature string. The signature is generated using
HMAC-SHA1 as specified in RFC section 3.4.2, and is returned as an
Authorization header value as specified in RFC section 3.5.1. This
header can then be used in an HTTP request via, for example, the
-h flag in http_get(1) and http_post(1) or the -H flag in curl(1).
Looks like it comes with a library exposing the functions for use in your own C applications as well.
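Alternatively, the HMAC-SHA1 signature itself is simple enough to compute with a scripting language's standard library. The sketch below is a simplified illustration of RFC 5849 section 3.4 (the function name is made up, it assumes all OAuth parameters including nonce and timestamp are already collected in params, and it skips duplicate-key edge cases):

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth_hmac_sha1(consumer_secret: str, token_secret: str,
                    method: str, url: str, params: dict) -> str:
    # Percent-encode per RFC 3986: only unreserved characters stay raw.
    enc = lambda s: urllib.parse.quote(str(s), safe="~")
    # Normalized parameter string: sorted key=value pairs joined by "&".
    norm = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # Signature base string: METHOD & encoded-URL & encoded-params.
    base = "&".join([method.upper(), enc(url), enc(norm)])
    # Signing key: consumer secret and token secret joined by "&".
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The returned base64 string is the oauth_signature value; the full Authorization header that curl's -H flag needs still has to be assembled around it, which is the part oauth_sign automates.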
| generate Token for OAuth (Twitter) |
1,288,603,099,000 |
I know that GnuPG is all about security, so it doesn't offer many ways to retrieve private keys (otherwise anyone could do it), but I have my private key and my own rev.asc file.
I had to reinstall my Ubuntu box (former Ubuntu Studio) and I have backup of /home and /etc.
Is it possible to recover my GnuPG key instead of revoke it and create another one?
|
By default, GPG stores everything under the .gnupg directory in your home directory. (With GnuPG up to 2.0 your encrypted private key is in ~/.gnupg/secring.gpg; GnuPG 2.1 and later keep private keys under ~/.gnupg/private-keys-v1.d/ instead.)
Restoring the entire ~/.gnupg directory from your backup will do the trick.
| How to restore GnuPG key after reinstall? |
1,288,603,099,000 |
I am looking for a simple way (perhaps using openssl) to generate SCRAM-SHA-1 hash of a password for use for Prosody Jabber Server. The passwords on the server are stored in the following form:
["iteration_count"] = 4096;
["stored_key"] = "f76e63cb5bb7f78e99b07196646c39a0f9422ef7";
["salt"] = "5317fe92-be09-4e0c-8501-55e5fb325543";
["server_key"] = "eb701c012450813185104934f88a9d07a7f211d9";
Can anybody suggest something ?
|
Correct me if I'm wrong, cryptography isn't my strong suit 8-) but this library looks to give you what you want. It's in Python:
http://pythonhosted.org/passlib/lib/passlib.hash.scram.html
You can use it like so:
>>> hash = scram.encrypt("password", rounds=1000, algs="sha-1,sha-256,md5")
>>> hash
'$scram$1000$RsgZo7T2/l8rBUBI$md5=iKsH555d3ctn795Za4S7bQ,sha-1=dRcE2AUjALLFtX5DstdLCXZ9Afw,sha-256=WYE/LF7OntriUUdFXIrYE19OY2yL0N5qsQmdPNFn7JE'
References
Modular Crypt Format
GNU SASL Library - Libgsasl
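For completeness, the values Prosody stores follow RFC 5802 directly, so they can also be recomputed with nothing but the Python standard library (SaltedPassword is PBKDF2-HMAC-SHA-1, StoredKey is SHA-1 of the client key):

```python
import base64
import hashlib
import hmac

def scram_sha1_keys(password: bytes, salt: bytes, iterations: int):
    # RFC 5802: SaltedPassword := Hi(password, salt, i) == PBKDF2-HMAC-SHA-1
    salted = hashlib.pbkdf2_hmac("sha1", password, salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha1).digest()
    stored_key = hashlib.sha1(client_key).digest()   # StoredKey := H(ClientKey)
    server_key = hmac.new(salted, b"Server Key", hashlib.sha1).digest()
    return stored_key.hex(), server_key.hex()

# RFC 5802 example credentials: password "pencil", 4096 iterations.
stored, server = scram_sha1_keys(b"pencil",
                                 base64.b64decode("QSXCR+Q6sek8bf92"), 4096)
print(stored, server)
```

Note that Prosody appears to store the salt as the literal string from its config, so with the question's data you would pass it as raw bytes, e.g. scram_sha1_keys(b"your-password", b"5317fe92-be09-4e0c-8501-55e5fb325543", 4096), and compare the first return value against stored_key.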
| generate SCRAM-SHA-1 hash of a password |
1,288,603,099,000 |
scrypt is a password-based key derivation function that can be tuned to use a large amount of memory.
I want a command-line interface to calculate the key given my own values for parameters: password, salt, n, r, p, length (these are like the parameters password, salt, cost in bcrypt).
Preferably, I can use something like scrypt --password message --salt mysalt -n 1024 -r 8 -p 8 --length 32 and get just 9a5ef931679f5003248953b6eea3827ca32eb6d07a417126670ba8555f40a0e0.
What software can do this job?
|
This implementation of scrypt appears to cover your requirement, see https://github.com/jkalbhenn/scrypt
scrypt-kdf [options ...] password [salt   N       r       p       size    salt-size]
                         string   [string integer integer integer integer integer  ]
options
-b|--base91-input password and salt arguments are base91 encoded
-c|--check hash test if hash is derived from a password
-h|--help display this text and exit
-p|--crypt use unix crypt format
-v|--version output version information and exit
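If a Python interpreter is acceptable in place of a dedicated binary, the standard library also ships an OpenSSL-backed implementation, hashlib.scrypt (Python 3.6+), which takes exactly the parameters you list:

```python
import hashlib

# Parameter names mirror the flags in the question:
# password, salt, n (CPU/memory cost), r (block size), p (parallelism),
# dklen (derived key length in bytes).
key = hashlib.scrypt(b"message", salt=b"mysalt", n=1024, r=8, p=8, dklen=32)
print(key.hex())
```

Wrapped in a small argument parser, this gives you the one-liner command interface described in the question.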
| scrypt key calculator |
1,288,603,099,000 |
When encrypting a file with symmetric key, most common utilities (such as gpg, mcrypt, etc) store information in the encrypted message which can be used to verify integrity of the key during decryption. E.g., if the wrong key is entered during decryption, gpg will retort with:
gpg: decryption failed: bad key
Suppose I am encrypting a file containing a string which is random. Then the key integrity check used in the standard utilities adds a vulnerability.
Is there a common utility which will not store any information or redundancy for verifying key/message integrity (and so will "decrypt" an encrypted file for any supplied key)?
|
As an alternative to my other answer, I'd like to offer something else. Something beautiful ... dm-crypt.
Plain dm-crypt (without LUKS) doesn't store anything about the key; on the contrary, cryptsetup is perfectly happy to open a plain device with any password and start using it. Allow me to illustrate:
[root:tmp]# fallocate -l 16M cryptfile
[root:tmp]# cryptsetup --key-file - open --type plain cryptfile cfile-open <<<"pa55w0rd"
Note: Your cryptfile has to be at least 512 bytes. I assume this is due to the minimum sector size cryptsetup enforces.
At this point, you would want to write all your random data out to the /dev/mapper/cfile-open. It would seem prudent to me that you size the original cryptfile appropriately ahead of time so that you will use all the space; however, you could just as easily treat this as another added bit of security-through-obscurity and make a note of exactly how much data you wrote. (This would only really work if the underlying blocks were already semi-random, i.e., if you're not going to completely fill the file, you should create it with openssl rand or dd if=/dev/urandom instead of fallocate.) ... You could even use dd to start writing somewhere in the middle of the device.
For now, I'll do something simpler.
[root:tmp]# cryptsetup status cfile-open
/dev/mapper/cfile-open is active.
type: PLAIN
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/loop0
loop: /tmp/cryptfile
offset: 0 sectors
size: 32768 sectors
mode: read/write
[root:tmp]# b $((32768*512))
B KiB MiB GiB TiB PiB EiB
16777216 16384.0 16.00 .01 0 0 0
[root:tmp]# ll cryptfile
-rw-r--r--. 1 root root 16777216 Feb 21 00:28 cryptfile
[root:tmp]# openssl rand -out /dev/mapper/cfile-open $((32768*512))
[root:tmp]# hexdump -n 16 -C /dev/mapper/cfile-open
00000000 00 1d 2d 11 ac 38 c4 d3 cc 81 4f 32 de 64 01 ca |..-..8....O2.d..|
00000010
[root:tmp]# cryptsetup close cfile-open
At this point I've filled my encrypted file with 16 MiB of random data. Watch what happens when I open it again using the wrong passphrase and then just to be clear, I'll open it again with the correct one and you'll see the original data is still intact.
[root:tmp]# cryptsetup --key-file - open --type plain cryptfile cfile-open <<<"pass"
[root:tmp]# hexdump -n 16 -C /dev/mapper/cfile-open
00000000 89 97 91 26 b5 46 87 0c 67 87 d8 4a cf 78 e6 d8 |...&.F..g..J.x..|
00000010
[root:tmp]# cryptsetup close cfile-open
[root:tmp]# cryptsetup --key-file - open --type plain cryptfile cfile-open <<<"pa55w0rd"
[root:tmp]# hexdump -n 16 -C /dev/mapper/cfile-open
00000000 00 1d 2d 11 ac 38 c4 d3 cc 81 4f 32 de 64 01 ca |..-..8....O2.d..|
00000010
[root:tmp]#
Enjoy.
| File encryption utility without key integrity check (symmetric key) |
1,288,603,099,000 |
I installed a kernel source from the official Linux kernel repository (http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.bz2) and recompiled it with some options needed to support Mobile IPv6. When I needed a module to encrypt some data, I didn't find it among the rest of the modules already built. The modules that I need are "echainiv" and "authenc".
|
The first step is to determine what configuration options you need to set in order for the module to build. I use
make menuconfig
for that; / followed by the configuration option you want will tell you where to find it and what its dependencies are. For ECHAINIV, you need to enable CRYPTO and then enable ECHAINIV (as a module since that’s what you’re after — in make menuconfig, the entry must show <M>, not <*>).
To build the module, look for the directory containing the corresponding source code:
find . -name echainiv\*
The code lives in crypto, so
make crypto/echainiv.ko
(from the top-level directory) will build the module for you.
To install the module, assuming you’re running the target kernel, run
sudo mkdir -p /lib/modules/$(uname -r)/kernel/crypto
sudo cp -i crypto/echainiv.ko /lib/modules/$(uname -r)/kernel/crypto
| How to build a specific kernel module? |
1,288,603,099,000 |
How do I add a key to a keyring in /proc/keys ?
My openembedded Linux does not come with a keyctl command program.
And all I can find on google is the programming interface, but I would like to do it from console input.
|
I don't know what you mean by "console input" but I guess you want to add and remove keys from shell scripts or the command line or such.
The interface to the kernel keyring is a set of system calls such as add_key(2). You cannot access system calls directly from the command line.
keyctl is the command line interface to the kernel keyring so you will need it.
| add key to proc/keys |
1,288,603,099,000 |
Is there a program that displays my SSH RSA key fingerprint
43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8
as PGP words?
Added on 16th October 2012:
I decided to write my own code. I found the PGP words on Wikipedia. However, I find some words capitalized while others are not. Is this a typo, or are they to be used as is?
|
20131029: I now have this on github.
My solution. Still working on it. Any tips would be helpful. I plan to put this out on git once I am done with all due acknowledgements. I tried to make this behave like cat - working from both the command line and stdin.
ssh-keygen -lf ~/.ssh/known_hosts | pgpWords.py
pgpWords.py 1:2:3
pgpWords.py 1:2 or pgpWords.py 1::1 does not work. I also intended to give a healthy set of return codes to allow automating tasks at the shell. Check return codes using echo $?
#! /usr/bin/python3.2
import sys
import re
import fileinput
def getPGPWords(givenString):
pgpWords=[['aardvark','adroitness'],['absurd','adviser'],['accrue','aftermath'],['acme','aggregate'],['adrift','alkali'],['adult','almighty'],['afflict','amulet'],['ahead','amusement'],['aimless','antenna'],['Algol','applicant'],['allow','Apollo'],['alone','armistice'],['ammo','article'],['ancient','asteroid'],['apple','Atlantic'],['artist','atmosphere'],['assume','autopsy'],['Athens','Babylon'],['atlas','backwater'],['Aztec','barbecue'],['baboon','belowground'],['backfield','bifocals'],['backward','bodyguard'],['banjo','bookseller'],['beaming','borderline'],['bedlamp','bottomless'],['beehive','Bradbury'],['beeswax','bravado'],['befriend','Brazilian'],['Belfast','breakaway'],['berserk','Burlington'],['billiard','businessman'],['bison','butterfat'],['blackjack','Camelot'],['blockade','candidate'],['blowtorch','cannonball'],['bluebird','Capricorn'],['bombast','caravan'],['bookshelf','caretaker'],['brackish','celebrate'],['breadline','cellulose'],['breakup','certify'],['brickyard','chambermaid'],['briefcase','Cherokee'],['Burbank','Chicago'],['button','clergyman'],['buzzard','coherence'],['cement','combustion'],['chairlift','commando'],['chatter','company'],['checkup','component'],['chisel','concurrent'],['choking','confidence'],['chopper','conformist'],['Christmas','congregate'],['clamshell','consensus'],['classic','consulting'],['classroom','corporate'],['cleanup','corrosion'],['clockwork','councilman'],['cobra','crossover'],['commence','crucifix'],['concert','cumbersome'],['cowbell','customer'],['crackdown','Dakota'],['cranky','decadence'],['crowfoot','December'],['crucial','decimal'],['crumpled','designing'],['crusade','detector'],['cubic','detergent'],['dashboard','determine'],['deadbolt','dictator'],['deckhand','dinosaur'],['dogsled','direction'],['dragnet','disable'],['drainage','disbelief'],['dreadful','disruptive'],['drifter','distortion'],['dropper','document'],['drumbeat','embezzle'],['drunken','enchanting'],['Dupont','enrollment'],['dwelling','enterprise'],
['eating','equation'],['edict','equipment'],['egghead','escapade'],['eightball','Eskimo'],['endorse','everyday'],['endow','examine'],['enlist','existence'],['erase','exodus'],['escape','fascinate'],['exceed','filament'],['eyeglass','finicky'],['eyetooth','forever'],['facial','fortitude'],['fallout','frequency'],['flagpole','gadgetry'],['flatfoot','Galveston'],['flytrap','getaway'],['fracture','glossary'],['framework','gossamer'],['freedom','graduate'],['frighten','gravity'],['gazelle','guitarist'],['Geiger','hamburger'],['glitter','Hamilton'],['glucose','handiwork'],['goggles','hazardous'],['goldfish','headwaters'],['gremlin','hemisphere'],['guidance','hesitate'],['hamlet','hideaway'],['highchair','holiness'],['hockey','hurricane'],['indoors','hydraulic'],['indulge','impartial'],['inverse','impetus'],['involve','inception'],['island','indigo'],['jawbone','inertia'],['keyboard','infancy'],['kickoff','inferno'],['kiwi','informant'],['klaxon','insincere'],['locale','insurgent'],['lockup','integrate'],['merit','intention'],['minnow','inventive'],['miser','Istanbul'],['Mohawk','Jamaica'],['mural','Jupiter'],['music','leprosy'],['necklace','letterhead'],['Neptune','liberty'],['newborn','maritime'],['nightbird','matchmaker'],['Oakland','maverick'],['obtuse','Medusa'],['offload','megaton'],['optic','microscope'],['orca','microwave'],['payday','midsummer'],['peachy','millionaire'],['pheasant','miracle'],['physique','misnomer'],['playhouse','molasses'],['Pluto','molecule'],['preclude','Montana'],['prefer','monument'],['preshrunk','mosquito'],['printer','narrative'],['prowler','nebula'],['pupil','newsletter'],['puppy','Norwegian'],['python','October'],['quadrant','Ohio'],['quiver','onlooker'],['quota','opulent'],['ragtime','Orlando'],['ratchet','outfielder'],['rebirth','Pacific'],['reform','pandemic'],['regain','Pandora'],['reindeer','paperweight'],['rematch','paragon'],['repay','paragraph'],['retouch','paramount'],['revenge','passenger'],['reward','pedigree'],['rhythm','Pegas
us'],['ribcage','penetrate'],['ringbolt','perceptive'],['robust','performance'],['rocker','pharmacy'],['ruffled','phonetic'],['sailboat','photograph'],['sawdust','pioneer'],['scallion','pocketful'],['scenic','politeness'],['scorecard','positive'],['Scotland','potato'],['seabird','processor'],['select','provincial'],['sentence','proximate'],['shadow','puberty'],['shamrock','publisher'],['showgirl','pyramid'],['skullcap','quantity'],['skydive','racketeer'],['slingshot','rebellion'],['slowdown','recipe'],['snapline','recover'],['snapshot','repellent'],['snowcap','replica'],['snowslide','reproduce'],['solo','resistor'],['southward','responsive'],['soybean','retraction'],['spaniel','retrieval'],['spearhead','retrospect'],['spellbind','revenue'],['spheroid','revival'],['spigot','revolver'],['spindle','sandalwood'],['spyglass','sardonic'],['stagehand','Saturday'],['stagnate','savagery'],['stairway','scavenger'],['standard','sensation'],['stapler','sociable'],['steamship','souvenir'],['sterling','specialist'],['stockman','speculate'],['stopwatch','stethoscope'],['stormy','stupendous'],['sugar','supportive'],['surmount','surrender'],['suspense','suspicious'],['sweatband','sympathy'],['swelter','tambourine'],['tactics','telephone'],['talon','therapist'],['tapeworm','tobacco'],['tempest','tolerance'],['tiger','tomorrow'],['tissue','torpedo'],['tonic','tradition'],['topmost','travesty'],['tracker','trombonist'],['transit','truncated'],['trauma','typewriter'],['treadmill','ultimate'],['Trojan','undaunted'],['trouble','underfoot'],['tumor','unicorn'],['tunnel','unify'],['tycoon','universe'],['uncut','unravel'],['unearth','upcoming'],['unwind','vacancy'],['uproot','vagabond'],['upset','vertigo'],['upshot','Virginia'],['vapor','visitor'],['village','vocalist'],['virus','voyager'],['Vulcan','warranty'],['waffle','Waterloo'],['wallet','whimsical'],['watchword','Wichita'],['wayside','Wilmington'],['willow','Wyoming'],['woodlark','yesteryear'],['Zulu','Yucatan']];
matchedHexString=re.findall('[0-9a-fA-F]+:.*:[0-9a-fA-F]+',givenString);
if (len(matchedHexString)>0):
numStr=matchedHexString[0].split(':');
try:
PGPWordsString='';
i=1;
for hexNum in numStr:
PGPWordsString=PGPWordsString+' '+pgpWords[int(hexNum,16)][i%2];
i=i+1;
PGPWordsString=PGPWordsString.strip();
except:
PGPWordsString=-2;
return PGPWordsString;
else:
return -1;
def main():
if (len(sys.argv)>1):
pgpWords=getPGPWords(sys.argv[1]);
else:
for line in fileinput.input():
pgpWords=getPGPWords(line);
if (isinstance(pgpWords, int)):
return pgpWords;
else:
print (pgpWords)
return 0;
if __name__ == "__main__":
r=main()
sys.exit(r)
| PGP words of an RSA fingerprint |
1,288,603,099,000 |
I need to encrypt some data using aes-256-ecb since a backend code expects it as a configuration. I'm able to encrypt using a key which is derived from a passphrase using:
openssl enc -p -aes-256-ecb -nosalt -pbkdf2 -base64 -in data-plain.txt -out data-encrypted.txt | sed 's/key=//g'
This encrypts using derived key and outputs the key in console.
However, I couldn't find how to do it with a generated key, something like:
Generate a 256-bit key using:
openssl rand -base64 32 > key.data
Then use this key during encryption, with something like:
openssl enc -p -aes-256-ecb -key=key.data -nosalt -pbkdf2 -base64 -in data-plain.txt -out data-encrypted.txt
Is this possible?
|
You have to specify the key in hex using -K. Note that you also need to specify the IV with -iv for some ciphers and modes of operation. You will also need to add -nopad for ECB decryption if you are decrypting a raw AES block (i.e. no padding is used). Be aware that ECB is highly insecure if used to encrypt more than one block.
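Putting that together with the question's workflow, here is a sketch; since -K takes the raw key as hex, the key is generated with rand -hex (64 hex digits = 256 bits) rather than rand -base64:

```shell
# Generate a raw 256-bit key as hex for use with -K.
KEY=$(openssl rand -hex 32)

printf 'example plaintext\n' > data-plain.txt

# Encrypt: ECB needs no IV, so -K alone is enough here.
openssl enc -aes-256-ecb -K "$KEY" -base64 \
    -in data-plain.txt -out data-encrypted.txt

# Decrypt with the same hex key.
openssl enc -d -aes-256-ecb -K "$KEY" -base64 \
    -in data-encrypted.txt -out data-roundtrip.txt
```

Keep the warning above in mind: ECB is only safe for a single block, so prefer a mode with an IV (and ideally authentication) for anything beyond this backend's fixed requirement.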
| openssl encrypt by specifying AES 256 key instead of passphrase |
1,288,603,099,000 |
I have CentOS 7 and Apache and the Haproxy load balancer with SSL support.
How to make the server compliant to FIPS 140-2?
From CHAPTER 10. FEDERAL STANDARDS AND REGULATIONS | redhat.com
I got the following instructions:
In /etc/sysconfig/prelink, disable prelinking:
PRELINKING=no
Install the FIPS dracut module and regenerate the initramfs:
# yum install dracut-fips
# dracut -f
Add the following to the kernel command line:
fips=1
If /boot is on a separate partition, identify it and add a matching boot= parameter as well:
$ df /boot
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 495844 53780 416464 12% /boot
boot=/dev/sda1
In /etc/ssh/sshd_config, restrict the protocol, ciphers and MACs:
Protocol 2
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
Macs hmac-sha1,hmac-sha2-256,hmac-sha2-512
Is this enough to make my HTTPS services FIPS 140-2 compliant?
|
In addition to SSL/TLS, OpenSSL provides general purpose crypto libraries. In context, FIPS-mode merely removes access to all of the algorithms that have not been approved by NIST. If in FIPS mode, the following command should fail.
openssl md5 filename
On RedHat system at least, one can also find the status of FIPS mode in the proc file system.
cat /proc/sys/crypto/fips_enabled
The result of the command produces 0 (FIPS is not enabled.) or 1 (FIPS is enabled.).
Is it possible that one may need to regenerate certificates for the Web servers after entering FIPS mode? Perhaps.
Is the FIPS-mode requirement also a small part of applying a STIG? There exists a very convenient website to view STIG requirements. The RHEL6 STIG is available at stigviewer.com. Included in the requirements are the commands to apply and verify the settings. It's quite easy to do. The official source is somewhat more difficult to use, but a RHEL7 STIG does exist there. The STIGs from the official sources are produced in XML and expected to be viewed with "STIG Viewer Version 2.7," which can be found in the list of STIGs.
Update: the RHEL7 STIG is now available at stigviewer.com.
Do the very best you can do, and then let the Information Assurance Officer tell you what more may need to be done. In addition, one could choose to apply the draft version of the RHEL7 STIG at the time of installation by choosing a security policy. This policy does some of the "heavy lifting" in STIG configuration, but one would still need to verify that all STIG settings have been applied.
There are also other applicable STIGs, one for the Web server and one for the Web application. A database STIG may also apply.
| FIPS 140-2 compliance for Apache and Haproxy on CentOS 7 [closed] |
1,288,603,099,000 |
I would like to have an encrypted DNS queries + a DNS Cache + Domain Name System Security Extensions (DNSSEC) .
I used this bash script to install DNSCrypt and I chose to use the dnscrypt.eu servers:
DNSCrypt.eu (no logs)
Holland
Server address:
176.56.237.171:443
Provider name
2.dnscrypt-cert.dnscrypt.eu
Public key
67C0:0F2C:21C5:5481:45DD:7CB4:6A27:1AF2:EB96:9931:40A3:09B6:2B8D:1653:1185:9C66
I installed ( apt-get install unbound ) Unbound and my unbound.conf file contains :
#
# See the unbound.conf(5) man page.
#
# See /usr/share/doc/unbound/examples/unbound.conf for a commented
# reference config file.
server:
# The following line will configure unbound to perform cryptographic
# DNSSEC validation using the root trust anchor.
auto-trust-anchor-file: "/var/lib/unbound/root.key"
server:
verbosity: 1
num-threads: 4
interface: 0.0.0.0
do-ip4: yes
do-udp: yes
do-tcp: yes
access-control: 192.168.0.0/24 allow
do-not-query-localhost: no
chroot: ""
logfile: "/var/log/unbound.log"
use-syslog: no
hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
use-caps-for-id: yes
private-domain: "localhost"
local-zone: "localhost." static
local-data: "freebox.localhost. IN A 192.168.0.254"
local-data-ptr: "192.168.0.254 freebox.localhost"
python:
remote-control:
forward-zone:
name: "."
forward-addr: 127.0.0.1@40
Like you see, I added this line to activate DNSSEC :
server:
# The following line will configure unbound to perform cryptographic
# DNSSEC validation using the root trust anchor.
auto-trust-anchor-file: "/var/lib/unbound/root.key"
Now, when I enter : sudo service unbound start
This is the error that I get :
* Restarting recursive DNS server unbound
[1382606879] unbound[8878:0] error: bind: address already in use
[1382606879] unbound[8878:0] fatal error: could not open ports
My question is of course about the error!
Also, is it useful to use DNSSEC on an ordinary laptop (not a DNS server), or is it just useful for DNS servers?
|
Thanks @Jiri Xichtkniha and @Anthon
When typing
sudo lsof -nPi | grep \:53
I can see that BIND is also listening on the same port:
TCP *:53 (LISTEN)
I then made a modification to /etc/unbound/unbound.conf by adding this line:
port: 533
P.S.: The port number, default 53, is the one on which the server responds to queries.
Another solution is to change the port of BIND from 53 to something else.
| DNSCrypt, Unbound and DNSSEC |
1,288,603,099,000 |
$ ssh 192.168.29.126
The authenticity of host '192.168.29.126 (192.168.29.126)' can't be established.
ECDSA key fingerprint is SHA256:1RG/OFcYAVv57kcP784oaoeHcwjvHDAgtTFBckveoHE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
What is the "fingerprint" it is asking for?
|
The question asks whether you trust and want to continue connecting to the host that SSH does not recognise. It gives you several ways of answering:
yes, you trust the host and want to continue connecting to it.
no, you do not trust the host, and you do not want to continue connecting to it.
[fingerprint] means that you may paste in the fingerprint, i.e. the hash of the host's key, as the reply to the question. If the pasted fingerprint is the same as the host's fingerprint (as discovered by SSH), then the connection continues; otherwise, it's terminated.
The fingerprint that answers the question in the affirmative is the exact string shown in the actual question (SHA256:1RG/OFcYAVv57kcP784oaoeHcwjvHDAgtTFBckveoHE in your case). If you have stored this fingerprint elsewhere, it's easier to paste it in from there than to compare the long string by eye.
In short: The third answer alternative provides a convenient way to verify that the fingerprint for the host is what you think it should be.
Using the fingerprint to answer the question was introduced in OpenSSH 8.0 (in 2019).
The commit message reads
Accept the host key fingerprint as a synonym for "yes" when accepting
an unknown host key. This allows you to paste a fingerprint obtained
out of band into the yes/no prompt and have the client do the comparison
for you. ok markus@ djm@
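For the curious, the fingerprint is nothing more exotic than a hash of the host's raw public key blob. Here is a sketch of the derivation using openssl and coreutils; the blob below is fake bytes (base64 of the string not-a-real-key) purely to show the shape, while a real blob is the long base64 field in a .pub file or a known_hosts line:

```shell
# An OpenSSH SHA256 fingerprint is base64(sha256(raw key blob)) with the
# trailing '=' padding stripped, which always yields 43 base64 characters.
blob='bm90LWEtcmVhbC1rZXk='   # stand-in "key blob" (not a real key)
fp=$(printf '%s' "$blob" | base64 -d | openssl dgst -sha256 -binary | base64 | tr -d '=')
echo "SHA256:$fp"
```

On the server, ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub prints the same kind of string, which is what you would compare against (or paste into) the prompt.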
| What is the fingerprint ssh is asking for? |
1,288,603,099,000 |
I am trying to prepare an NVMe drive for encryption, so I first followed this post on SO.
But the speed of dd is really, really slow (less than 100 MB/s). I see there is a new option to speed up dm-crypt in kernel 5.9 (see this post), but before updating my kernel, I want to know whether the nvme-cli write-zeroes tool is equivalent to /dev/zero for preparing a disk: https://manpages.debian.org/testing/nvme-cli/nvme-write-zeroes.1.en.html
The current (and very, very slow) commands to prepare the disk before the LUKS2 format:
cryptsetup plainOpen --key-file /dev/urandom /dev/nvme0n1p2 ecrypt
dd if=/dev/zero of=/dev/mapper/ecrypt bs=1M status=progress
cryptsetup plainClose
Update:
After moving to kernel 5.12 with cryptsetup 2.3.4, I use the new perf options:
cryptsetup plainOpen --perf-no_read_workqueue --perf-no_write_workqueue --key-file /dev/urandom /dev/nvme0n1p2 ecrypt
dmsetup table says the options are correctly activated:
ecrypt: 0 1999358607 crypt aes-cbc-essiv:sha256 0000000000000000000000000000000000000000000000000000000000000000 0 259:2 0 2 no_read_workqueue no_write_workqueue
I also verified that AES is enabled, with cpuid:
cpuid | grep -i aes | sort | uniq
AES instruction = true
VAES instructions = false
I still have the same problem: dd writes zeros at 900 MB/s and slowly decreases to 45 MB/s...
|
Found the answer: this is much, much better with oflag=direct, jumping from 45 MB/s to 536 MB/s :)
dd if=/dev/zero of=/dev/mapper/ecrypt oflag=direct bs=1M status=progress
Thanks to these two posts:
NVMe performance hit when using LUKS encryption
https://stackoverflow.com/questions/33485108/why-is-dd-with-the-direct-o-direct-flag-so-dramatically-faster
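The effect is easy to reproduce against a scratch file (a sketch; oflag=direct fails with an error on filesystems without O_DIRECT support, such as tmpfs):

```shell
# Buffered write: goes through the page cache, so dd's reported rate can
# collapse once the cache has to be flushed to the slow target.
dd if=/dev/zero of=/tmp/zeros.img bs=1M count=8 status=none

# Direct write: bypasses the page cache and avoids that collapse.
dd if=/dev/zero of=/tmp/zeros.img bs=1M count=8 oflag=direct status=none \
  || echo "O_DIRECT not supported on this filesystem"

rm -f /tmp/zeros.img
```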
| Slow /dev/zero format using dd with nvme to prepare crypto, is there nvme specific tool? |
1,288,603,099,000 |
man 3 crypt clearly states that it uses DES. I thought DES was deprecated, but I see no notice that crypt would be deprecated.
Why does it not use AES instead, and is crypt(3) deprecated?
Is it simply a case of "DES is secure enough for the purpose of this library", and that programs should use other libraries for encryption of important stuff?
|
crypt is easily breakable (it was in fact written by Robert Morris, a famous contributor to the early Unix, as a workbench for codebreaking activities) and should not be used for anything important.
From the crypt manpage:
The DES algorithm itself has a few quirks which make the use of the crypt() interface a very poor choice for anything other than password authentication. If you are planning on using the crypt() interface for a cryptography project, don't do it: get a good book on encryption and one of the widely available DES libraries.
For any real-world use, there are cryptographically stronger alternatives available, such as mcrypt and ccrypt (which uses AES).
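To see just how limited the format is, here is a small demonstration via Perl's crypt(), a thin wrapper over crypt(3) (assuming your libc still supports the traditional DES scheme): the hash is always 13 characters (a 2-character salt plus 11), and only the first 8 characters of the password are used at all.

```shell
# Traditional DES crypt(3): 13-char result, 8 significant password chars.
h1=$(perl -e 'print crypt("password",    "ab")')
h2=$(perl -e 'print crypt("password999", "ab")')   # extra chars are ignored
echo "$h1"
[ "$h1" = "$h2" ] && echo "only the first 8 password characters mattered"
```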
| Why does crypt(3) use DES? [closed] |
1,288,603,099,000 |
I recently learned that Linux supports Adiantum as a disk encryption cipher (run cryptsetup benchmark -c xchacha20,aes-adiantum-plain64 to try it out on your system). While Adiantum is primarily meant to provide faster disk encryption for low-end devices that do not support hardware AES acceleration, it is also a wide block cipher mode, meaning that a single bit flip in the ciphertext randomizes an entire sector of plaintext, whereas in AES-XTS mode (the current recommended cipher when AES acceleration is available) a single bit flip in the ciphertext randomizes only a 16 byte block of plaintext. That gives a potential attacker much more granularity and block boundaries to work with. So in this respect Adiantum is strictly more secure than AES-XTS.
Adiantum is a construction built from a hash, a bulk cipher and a block cipher. The currently available variants in my Linux kernel (v5.4) use ChaCha12 or ChaCha20 as bulk cipher. For the intended use on devices without hardware AES acceleration that is great, but now I also want to use it on my laptop with AES acceleration where AES-XTS is about twice as fast as Adiantum.
Are there any wide block ciphers for disk encryption optimized for hardware AES acceleration available for Linux, or being worked on?
@anyone from the future, if the answer is 'no' at the time I'm writing this but has changed by the time you read this question, please do post an answer with the updates at your time.
|
The designers of Adiantum also tackled this question and came up with HCTR2. It is similar to Adiantum but uses AES-XCTR as encryption and POLYVAL as accelerated hash function. Available in Linux kernel version 6.
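A quick way to check whether your running kernel and tooling already expose it (a sketch: hctr2 is the kernel crypto API name, and aes-hctr2-plain64 is the assumed dm-crypt cipher spec; the benchmark needs root):

```shell
# Does the kernel crypto API register HCTR2?
grep -B1 -A2 'hctr2' /proc/crypto || echo "hctr2 not registered in this kernel"

# If it is registered, benchmark it the same way as Adiantum:
cryptsetup benchmark -c aes-hctr2-plain64 2>/dev/null || true
```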
| Fast wideblock AES disk encryption in Linux? |
1,352,615,133,000 |
I know I can change some fundamental settings of the Linux console, things like fonts, for instance, with dpkg-reconfigure console-setup.
But I'd like to change things like blinkrate, color, and shape (I want my cursor to be a block, at all times). I've seen people accomplishing this. I just never had a chance to ask those people how to do that.
I don't mean terminal emulator windows, I mean the Linux text console, you reach with Ctrl+Alt+F-key
I'm using Linux Mint at the moment, which is a Debian derivative. I'd like to know how to do this in Fedora as well, though.
Edit: I might be on to something
I learned from this website, how to do the changes I need. But I'm not finished yet.
I've settled on using echo -e "\e[?16;0;200c" for now, but I've got a problem: when running applications like vim or irssi, or attaching a screen session, the cursor reverts back to being a blinking gray underscore.
And of course, it only works on this one tty; all other text consoles are unaffected.
So how can I make those changes permanent? How can I populate them to other consoles?
|
GitHub Gist: How to change cursor shape, color, and blinkrate of Linux Console
I define the following cursor formatting settings in my .bashrc file (or /etc/bashrc):
##############
# pretty prompt and font colors
##############
# alter the default colors to make them a bit prettier
echo -en "\e]P0000000" #black
echo -en "\e]P1D75F5F" #darkred
echo -en "\e]P287AF5F" #darkgreen
echo -en "\e]P3D7AF87" #brown
echo -en "\e]P48787AF" #darkblue
echo -en "\e]P5BD53A5" #darkmagenta
echo -en "\e]P65FAFAF" #darkcyan
echo -en "\e]P7E5E5E5" #lightgrey
echo -en "\e]P82B2B2B" #darkgrey
echo -en "\e]P9E33636" #red
echo -en "\e]PA98E34D" #green
echo -en "\e]PBFFD75F" #yellow
echo -en "\e]PC7373C9" #blue
echo -en "\e]PDD633B2" #magenta
echo -en "\e]PE44C9C9" #cyan
echo -en "\e]PFFFFFFF" #white
clear #for background artifacting
# set the default text color. this only works in tty (eg $TERM == "linux"), not pts (eg $TERM == "xterm")
setterm -background black -foreground green -store
# http://linuxgazette.net/137/anonymous.html
cursor_style_default=0 # hardware cursor (blinking)
cursor_style_invisible=1 # hardware cursor (blinking)
cursor_style_underscore=2 # hardware cursor (blinking)
cursor_style_lower_third=3 # hardware cursor (blinking)
cursor_style_lower_half=4 # hardware cursor (blinking)
cursor_style_two_thirds=5 # hardware cursor (blinking)
cursor_style_full_block_blinking=6 # hardware cursor (blinking)
cursor_style_full_block=16 # software cursor (non-blinking)
cursor_background_black=0 # same color 0-15 and 128-infinity
cursor_background_blue=16 # same color 16-31
cursor_background_green=32 # same color 32-47
cursor_background_cyan=48 # same color 48-63
cursor_background_red=64 # same color 64-79
cursor_background_magenta=80 # same color 80-95
cursor_background_yellow=96 # same color 96-111
cursor_background_white=112 # same color 112-127
cursor_foreground_default=0 # same color as the other terminal text
cursor_foreground_cyan=1
cursor_foreground_black=2
cursor_foreground_grey=3
cursor_foreground_lightyellow=4
cursor_foreground_white=5
cursor_foreground_lightred=6
cursor_foreground_magenta=7
cursor_foreground_green=8
cursor_foreground_darkgreen=9
cursor_foreground_darkblue=10
cursor_foreground_purple=11
cursor_foreground_yellow=12
cursor_foreground_white=13
cursor_foreground_red=14
cursor_foreground_pink=15
cursor_styles="\e[?${cursor_style_full_block};${cursor_foreground_black};${cursor_background_green}c" # only seems to work in tty
# http://www.bashguru.com/2010/01/shell-colors-colorizing-shell-scripts.html
prompt_foreground_black=30
prompt_foreground_red=31
prompt_foreground_green=32
prompt_foreground_yellow=33
prompt_foreground_blue=34
prompt_foreground_magenta=35
prompt_foreground_cyan=36
prompt_foreground_white=37
prompt_background_black=40
prompt_background_red=41
prompt_background_green=42
prompt_background_yellow=43
prompt_background_blue=44
prompt_background_magenta=45
prompt_background_cyan=46
prompt_background_white=47
prompt_chars_normal=0
prompt_chars_bold=1
prompt_chars_underlined=4 # doesn't seem to work in tty
prompt_chars_blinking=5 # doesn't seem to work in tty
prompt_chars_reverse=7
prompt_reset=0
#start_prompt_coloring="\e[${prompt_chars_bold};${prompt_foreground_black};${prompt_background_green}m"
start_prompt_styles="\e[${prompt_chars_bold}m" # just use default background and foreground colors
end_prompt_styles="\e[${prompt_reset}m"
PS1="${start_prompt_styles}[\u@\h \W] \$${end_prompt_styles}${cursor_styles} "
##############
# end pretty prompt and font colors
##############
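The PS1 above is also what keeps the cursor style from being reset: the escape is re-sent on every prompt. To push the same style to all virtual consoles in one go (for example from a boot script), you can write the escape to each tty device; a sketch, which needs root for ttys you don't own:

```shell
# Send a non-blinking block cursor (softcursor style 16) to every
# virtual console, not just the current one.
for t in /dev/tty[1-6]; do
  printf '\033[?16;0;64c' 2>/dev/null > "$t" || true
done
```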
| How to change cursor shape, color, and blinkrate of Linux Console? |
1,352,615,133,000 |
Is xeyes purely for fun? What is the point of having it installed by default in many linux distrubutions (in X)?
|
xeyes is not for fun, at least not only. The purpose of this program is to let you follow the mouse pointer, which is sometimes hard to see. It is very useful on multi-headed computers where monitors are separated by some distance: if someone (say, a teacher at school) wants to present something on the screen, the others can easily follow the mouse on their own monitors with xeyes.
| What is the purpose of xeyes? |
1,352,615,133,000 |
So I have a 4k display, and for some reason Ubuntu decides that it's a good idea to give me a huge cursor instead of something normal. I don't have any DPI settings on the 4k monitor, and I don't want any, so why is the cursor so huge? This is what it looks like:
This is on Ubuntu 15.04 with XFCE4 with Nvidia drivers.
It only looks like that when the mouse is over system-dependent things (or something of that nature), such as the desktop, window titles, menu bar (File, Edit, View, ...) and context menus.
In Firefox it seems to work just fine, except in the bookmarks dropdown.
What I've already tried:
Running update-alternatives to force the cursor theme. This changes the cursor theme, but it doesn't change the cursor size.
Modify the cursor size in dconf-editor. This doesn't do anything.
Put Xcursor.size: 24 in ~/.Xdefaults. This also doesn't appear to do anything.
xrdb -query returns the following:
*customization: -color
Xft.dpi: 96
Xft.hintstyle: hintnone
Xft.rgba: none
Xcursor.theme: DMZ-Black
Xcursor.size: 24
Xcursor.theme_core: 1
|
I ended up solving it myself (kind of). It's not the ultimate fix, but it's a workaround that I can live with.
Essentially, I took the original sources of the DMZ-Cursors package and created a fork of DMZ-Black, then I removed the 32x32 and 42x42 images, and am now using that as my cursor set.
For convenience's sake, I've put up my version of DMZ-Black on GitHub: https://github.com/codecat/dmzblack-96dpi
If you wish to do the same with DMZ-White, simply download the sources here, copy DMZ-White, and remove all lines mentioning 32x32 and 42x42 in the *.in files. You can also remove the folders for those images if you want. Then simply run make.sh and copy the generated cursor files (in ../xcursors) to your cursors folder. (You can take my install script and change_cursor.sh as an example.)
| Cursor is huge on Ubuntu due to high resolution monitor |
1,352,615,133,000 |
I was wondering if there is a feature in Linux like the macOS "shake to locate cursor", which temporarily makes the user's mouse or trackpad cursor much larger when shaken back and forth, making it easier to locate if the user loses track of it.
|
In Linux Mint (18.1) you can go to Preferences > Mouse and, under Locate Pointer you can check a box that will tell the system to "Show position of pointer when the Control key is pressed".
I'm not sure if something similar is available on other distros.
Not quite what you asked for. Possibly useful?
| "Shake to locate cursor" feature |
1,352,615,133,000 |
Blinking is a common practice since the early time of computing, especially for cursors.
However, when I ran strace to check their system calls, neither a terminal emulator (konsole) nor a shell (bash) registered any kind of timer (through timer_settime()) or interval timer (through setitimer()). And these programs can hardly be spinning in a busy-wait loop just to pass the time.
Real terminals can do this, since their controllers understand the blink escape control sequence, but graphical monitors apparently cannot.
So how do these programs get their text blinking, especially in a graphical environment? Text can also blink in the non-X graphical terminal (like if you press Ctrl+Alt+F2).
How was the blinking terminal cursor invented? This question shows the reason why they were invented, and technical details on how real terminals implement them.
|
I've strace'd gnome-terminal-server, which is the actual process of GNOME Terminal.
When otherwise idle, just blinking the cursor, it resides in a poll(..., 598) or similar kernel call, i.e. a poll() with a slightly shorter than 0.6 second timeout. (GNOME's default for the full blink cycle is 1.2 seconds, therefore 0.6 seconds for each "on" or "off" state. This is shortened by the amount of the actual work it did the last time, like repainting the cursor area.)
This poll() waits until there's activity on one of the given file descriptors, or the timer elapses, or a signal arrives.
This poll() is not implemented "manually", rather VTE (the terminal emulation widget behind GNOME Terminal and quite a few others) registers an event handler in GLib's main loop, and GLib takes care of the actual underlying implementation. It's up to GLib to decide what method to use, e.g. it could use select, or epoll with timerfd, or presumably there are other choices as well.
I see pretty much identical poll() calls when strace'ing konsole, and I suspect most other terminal emulators do something similar, too.
The programs running on the terminal (Bash or other) only output the appropriate escape codes asking for blinking but are not involved in actually making it happen after that. The program printing the escapes might be long gone while the blinking text is still visible on the terminal.
| How is cursor blinking implemented in GUI terminal emulators? |
1,352,615,133,000 |
When running
top -n1 | head
the terminal's cursor disappears.
Running top -n1 brings it back.
Tested in gnome-terminal and tilix in Ubuntu 16.04 and CentOS 7.5.
Running top -n1 | tail doesn't have this issue, so I think something at the end of top's output lets the cursor reappear, and it is not printed when showing only the head.
What causes this, and how can I get the cursor back more elegantly?
|
I wasn't able to recreate this behavior everywhere, but it does show up on Ubuntu 18.04
It is instructive to examine hex dumps of the top output:
$ top -n1 | head -n1 | xxd
00000000: 1b5b 3f31 681b 3d1b 5b3f 3235 6c1b 5b48 .[?1h.=.[?25l.[H
00000010: 1b5b 324a 1b28 421b 5b6d 746f 7020 2d20 .[2J.(B.[mtop -
00000020: 3133 3a34 333a 3034 2075 7020 3120 6d69 13:43:04 up 1 mi
00000030: 6e2c 2020 3120 7573 6572 2c20 206c 6f61 n, 1 user, loa
00000040: 6420 6176 6572 6167 653a 2030 2e38 312c d average: 0.81,
00000050: 2030 2e35 342c 2030 2e32 321b 2842 1b5b 0.54, 0.22.(B.[
00000060: 6d1b 5b33 393b 3439 6d1b 2842 1b5b 6d1b m.[39;49m.(B.[m.
00000070: 5b33 393b 3439 6d1b 5b4b 0a [39;49m.[K.
$ top -n1 | tail -n1 | xxd
00000000: 1b5b 3f31 326c 1b5b 3f32 3568 1b5b 4b .[?12l.[?25h.[K
$
The sequences starting with 0x1b 0x5b 0x3f (ESC [ ?) are ANSI escape sequences, effectively meta-data that control things like cursor position and text colour.
In particular, towards the start of the first line of top's output there is ESC [?25l, and towards the end of the last line is ESC [?25h. As per the Wikipedia page, these are the respective codes to hide and show the cursor.
By piping the top -n1 output to head, the terminal will receive the hide-cursor command at the start, but not the show-cursor command at the end, and hence the cursor will remain invisible until some other action turns it on again.
@MrShunz's suggestion to use the -b option to top is right on. This option disables all of the ANSI escape sequences in top's output, instead outputting just plain printable ASCII text. No cursors will be harmed during the execution of top with -b:
$ top -b -n1 | head -n1 | xxd
00000000: 746f 7020 2d20 3133 3a35 393a 3236 2075 top - 13:59:26 u
00000010: 7020 3138 206d 696e 2c20 2031 2075 7365 p 18 min, 1 use
00000020: 722c 2020 6c6f 6164 2061 7665 7261 6765 r, load average
00000030: 3a20 302e 3134 2c20 302e 3036 2c20 302e : 0.14, 0.06, 0.
00000040: 3037 0a 07.
$
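A more elegant fix than re-running top is to print the show-cursor sequence yourself, or to let terminfo emit the right one for your terminal:

```shell
# DECTCEM "show cursor" -- exactly the sequence that head cut off:
printf '\033[?25h'
# Equivalent, but portable across terminal types (uses terminfo):
tput cnorm 2>/dev/null || true
```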
| Cursor disappears when running `top -n1 | head` |
1,352,615,133,000 |
The default X11 cursors are quite tiny when the display is a 4k screen. How can I use bigger cursors? Requirements:
Must work under plain X11 (no KDE, Gnome or similar bloat)
Should have at least a bigger root window cursor, i.e "arrow"
Should work on FreeBSD
I have looked at the Xcursor(3) manual page which talks about the ~/.icons directory but I am unsure which files to place there and how to activate them. I have a bunch of directories on the system, such as
/usr/local/share/icons/oxygen/64x64
/usr/local/share/icons/oxygen/64x64/categories
/usr/local/share/icons/oxygen/64x64/apps
/usr/local/share/icons/oxygen/64x64/devices
/usr/local/share/icons/oxygen/64x64/emotes
/usr/local/share/icons/oxygen/64x64/mimetypes
/usr/local/share/icons/oxygen/64x64/emblems
/usr/local/share/icons/oxygen/64x64/actions
/usr/local/share/icons/oxygen/64x64/places
/usr/local/share/icons/oxygen/64x64/status
/usr/local/share/icons/oxygen/48x48
/usr/local/share/icons/oxygen/48x48/emotes
/usr/local/share/icons/oxygen/48x48/devices
/usr/local/share/icons/oxygen/48x48/apps
/usr/local/share/icons/oxygen/48x48/mimetypes
/usr/local/share/icons/oxygen/48x48/status
/usr/local/share/icons/oxygen/48x48/emblems
/usr/local/share/icons/oxygen/48x48/actions
/usr/local/share/icons/oxygen/48x48/places
/usr/local/share/icons/oxygen/48x48/categories
/usr/local/share/icons/oxygen/48x48/animations
each of which contains a large number of icons as *.png files.
|
First off you don't need to remove, or prevent updates to, the old cursor.pcf file.
Next if your system already has the cursor.pfa properly installed in any existing fonts directory that your system is already using (i.e. it has a valid fonts.dir file and is already in the server font path), then you don't need to install any new files.
If you do need to install cursor.pfa, you can do so in any directory, including in a sub-directory of your home directory (as long as it's on the same system as where you run the Xserver, and is accessible by the Xserver process).
So once the scalable cursor font is somewhere in your font path you can put an alias for the cursor font name in any fonts.alias file on your system, including one in a new sub-directory of your home directory.
Lastly you need to include the real resolution of your screen in the alias specification! This isn't critical, but it is helpful as it means your choice of cursor font point size will bear a meaningful relationship to the physical size of the pointer you see on your screen -- i.e. a 12pt cursor font will generate cursor pointers that appear to be 12 points tall on your screen!
I'm currently using Xquartz (on macOS of course) in full-screen mode at full display resolution on a 32-inch display (i.e. I start Xquartz and then run xrandr -s 6016x3384). The cursor.pfa file is already installed by Xquartz, already mentioned in the existing fonts.dir file in the directory where it is installed, and can already be found by xlsfonts. Of course cursor.pcf is also already installed.
I happen to have a ~/.fonts directory where I've installed some other fonts I've downloaded, so I simply created fonts.alias file there as follows. Here you see I've chosen an 18pt cursor font for a screen with a pixel resolution of 218 pixels per inch:
$ cat .fonts/fonts.alias
cursor -xfree86-cursor-medium-r-normal--0-180-218-218-p-0-adobe-fontspecific
My font path already includes this directory so I need only rehash my font path to test:
$ xset q | sed -n /Font/,+1p
Font Path:
/Users/woods/.fonts/,/opt/X11/share/fonts/TTF/,/opt/X11/share/fonts/OTF/,/opt/X11/share/fonts/Type1/,/Library/Fonts/,/opt/X11/share/fonts/100dpi/:unscaled,/opt/X11/share/fonts/misc/:unscaled
$ fc-cache -r
$ xset fp rehash
Now both the bitmap and scalable "cursor" fonts are available, but the scalable one is first:
$ xlsfonts -Cl -fn cursor
DIR MIN MAX EXIST DFLT PROP ASC DESC NAME
--> 0 255 some 0 29 26 28 -xfree86-cursor-medium-r-normal--0-180-218-218-p-0-adobe-fontspecific
--> 0 153 all 0 9 16 17 cursor
Finally you need the following in your ~/.xinitrc (after your font path is reset) to reset the root-window cursor (as the Xserver will have started with the original tiny bitmap cursor):
xsetroot -cursor_name left_ptr
You can run that xsetroot command right away and move the cursor over the root window to see the effect. If you start any new program it will use the new scaled cursor(s) too. You'll need to restart your window manager and all running programs to have them take on the new scaled cursors, so it's easiest to logout and login again.
Interestingly, due to the weird magic of macOS, I now have over-sized cursors back in the macOS world.
| Bigger X11 Cursors suitable for 4k screens |
1,352,615,133,000 |
I am of course referring to the 'screenmate' cat, Neko - looks like this:
A few years ago, I remember Neko could follow my mouse cursor around, and it was a helpful distraction during long hours of work. I am reminded of this after finding this extension for XPenguins:
And in my short quest, I have found:
http://users.frii.com/suzannem/neko/index.html#links
http://sourceforge.net/projects/ungattino/?source=directory
The first does work, but within a window in Wine, as it is an exe:
The second seems to start and work after running it with java -jar..., but seems to crash or disappear sometimes.
So where can I get Neko for my computer, so that it can run around the screen like it used to? Ubuntu 12.04 & Fedora 19
Here are screenshots from after it was solved:
|
Oneko. Ubuntu, Fedora. Hasn't been updated since the last millennium so it's got to be the one you remember (also the image matches).
| Have you seen this cat? |
1,352,615,133,000 |
I recently installed Antergos with Xfce. However, the default cursor is massive when it's over a window that isn't part of the desktop itself. So, if I'm within the file manager everything looks normal. With Chrome/VSCode/Terminator the cursor is drastically bigger.
Within my appearance settings it is set to 16 which is the lowest.
Any ideas?
|
It seems like you have different cursor configurations for your Desktop Environment (Xfce) and your X server (in this setup: your desktop).
As Xfce relies on GTK, it'll store its settings in the GTK settings. There are some other apps that don't read these and need to be configured separately.
You can create a ~/.Xresources file and insert settings for those apps. For example, setting your cursor:
!Xcursor.theme: cursor-theme
Xcursor.size: 16
(Lines beginning with an ! are comments.)
After that, you need to load this configuration by typing ...
$ xrdb ~/.Xresources
This requires xorg-xrdb to be installed.
Note that not all apps apply these changes immediately, so you may need to restart them. (Or, if it doesn't change for the desktop, try restarting X.)
You can find more info about X resources and settings in the Glorious Arch Wiki page.
If you want to experiment a little bit with X resources, I recommend installing the urxvt terminal, which is a very good terminal itself and takes its configuration from the X resources. How to change stuff is documented in the linked article. To test changes, run $ xrdb ~/.Xresources, then close and reopen the terminal to see the effects.
| Large cursor XFCE |
1,352,615,133,000 |
When I try to search for this problem I get many results, but most of them seem to be about mouse cursor themes, and I haven't played with that, and can't see how that could explain the symptoms I see.
When the mouse cursor is over a window from thunderbird, firefox or a (group) chat from pidgin, the mouse cursor is 2-4 times the size it has over windows from xterm, liferea, pavucontrol, audacious or the friend list from pidgin (I think that's everything I have running right now). The exception is when pidgin's taskbar menu is open; then the cursor is the usual (small) size, no matter which window the cursor is in.
I use i3 as window manager with no desktop manager on debian Stretch (but I only upgraded a couple of days ago, and also saw the problem on Jessie).
Any explanation (and cure), or just hints on how to find out what is wrong, would be appreciated.
|
I just discovered that the problem is actually not window-related but widget(?)-related: the huge cursor doesn't occur over pidgin chat windows as such (as I wrote), but only over their text fields (which take up most of those windows, which probably explains why I didn't realise that before), either the one I type messages into or the one where messages appear.
The mouse cursor is actually also "small" (sanely sized) when over the title bar of windows. In addition, Chromium and Spotify (though I think I read somewhere that new versions of Spotify are mostly a camouflaged Chromium) are also on the list of programs that cause the huge cursor.
All that made me think that this might be a question of a (poor) mouse cursor theme, with a better-looking fallback used when the mouse is over a window that doesn't cooperate with the parts of GNOME that still infect my system, which also explains why bvx89 reported in a comment that he saw this on a new monitor. That made me search the net for "linux mouse cursor DPI", which led me to Cursor is huge on Ubuntu due to high resolution monitor, which actually seems to be the same problem, reversed (he reports a huge cursor over "system-dependent things" and not over firefox). His solution (selecting a cursor/icon theme with only one size, which is non-standard, but you can download https://github.com/codecat/dmzblack-96dpi, which he refers to) worked for me; now my mouse cursor has the same (sane) size everywhere.
| Huge mouse cursor in some windows |
1,352,615,133,000 |
So I'm writing a terminal emulation (I know, I should just compile putty, etc.) and am at the stage of plodding through vttest to make sure it's right. I'm basing it on the VT102 for now but will add later terminal features such as color when the basics are working right.
The command set is mostly ANSI. DEC had their own command set but supported the ANSI commands from about 1973. Those ANSI standards are apparently not available now, but the ECMA equivalents are; I have them (ECMA-48 seems most relevant) but they do not answer this question as far as I can see. Most ANSI command sequences start with ESC. Many commands start with the control sequence introducer, shown here as CSI and represented in the data as 0x1b 0x5b (ESC [), or as the single byte 0x9b if 8-bit communication was possible. Then follows a sequence identifying the command. Some commands affect cursor position, some the screen, some provoke a response to the host, and so on.
Some terminal commands include a numeric argument. Example CSI 10 ; 5 H means make the cursor position row 10, column 5. When the numeric argument is missing there is a default value to use: CSI 10 ; H means make the cursor position row 10, column 1 because 1 is the default value when an argument is not given.
I have the vt102 manual from vt100.net (great resource) and about a dozen pages giving partial information on these command sequences. Apparently the complete gospel DEC terminal spec never made it out of DEC.
What is clear is that CSI C is move cursor right and that the default value is 1.
What isn't clear is what is the meaning of CSI 0 C.
Why have a zero there? It would seem to make the command do nothing. If it means "use default value", then 1 could have been sent instead; better yet, the even shorter string with no argument would rely on the default being interpreted as 1 anyway. These actual physical VT terminals were often used at 300 baud and below, so the one character did matter!
I'm not so advanced with vttest that I can just try it both ways and see which makes everything perfect but I'm far enough that little questions like this are starting to matter.
|
I got in touch with Thomas Dickey (invisible-island.net) who maintains xterm and vttest - he explained that CSI 0 C is the same as CSI 1 C or CSI C in xterm.
For anyone looking for more information on terminal programming I highly recommend checking out the xterm source he hosts - specifically the ctlseqs.txt inside xterm, which looks very much like the one true terminal control sequences reference I've been searching for.
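A quick way to convince yourself in an xterm-compatible terminal: each line below prints ab, returns to column 1, moves the cursor right, and overwrites. If 0 is treated like the default of 1, all three should leave a* on screen.

```shell
printf 'ab\r\033[C*\n'    # CSI C   : no parameter, default 1
printf 'ab\r\033[1C*\n'   # CSI 1 C : explicit 1
printf 'ab\r\033[0C*\n'   # CSI 0 C : zero, treated as 1 by xterm
```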
| DEC ANSI command sequence questions; cursor movement |
1,352,615,133,000 |
Consider the file tmp.txt whose contents are:
x
abcd
I want to open it in VIM and move cursor to the c character.
So I run VIM with the arguments:
$ vim -c "/c" tmp.txt
But it sets the cursor on a. It looks like Vim was able to find c but placed the cursor at the beginning of the line. Why does this work differently from executing /c in Vim's normal mode once the file is open?
|
You can position the cursor on the first match using the -s (script) option. According to the vim manual:
-s {scriptin}
The script file {scriptin} is read. The characters in the file are interpreted as if you had typed them. The same can be done with the command ":source! {scriptin}". If the end of the file is reached before the editor exits, further characters are read from the keyboard.
You could use a temporary file with the keystrokes, or even (if you are using bash) process substitution. For example:
#!/bin/bash
vim -s <(printf '/c\n') tmp.txt
This approach works with more complicated searches than a single character.
| How to open a file in VIM and move cursor to the search result |
1,352,615,133,000 |
My screen is cracked, and the touchscreen makes the cursor spasm every once in a while. Is there any way I can fully disable it?
As requested:
Module Size Used by
ctr 13023 2
ccm 17587 2
rfcomm 57995 0
bnep 17432 2
bluetooth 386513 10 bnep,rfcomm
6lowpan_iphc 16588 1 bluetooth
binfmt_misc 16917 1
loop 26525 0
rtsx_usb_ms 16899 0
memstick 13654 1 rtsx_usb_ms
uvcvideo 78997 0
videobuf2_vmalloc 12816 1 uvcvideo
videobuf2_memops 12519 1 videobuf2_vmalloc
videobuf2_core 47704 1 uvcvideo
v4l2_common 12995 1 videobuf2_core
videodev 130540 3 uvcvideo,v4l2_common,videobuf2_core
media 18305 2 uvcvideo,videodev
hid_multitouch 17057 0
snd_hda_codec_hdmi 45134 1
snd_hda_codec_realtek 62994 1
snd_hda_codec_generic 63154 1 snd_hda_codec_realtek
joydev 17063 0
x86_pkg_temp_thermal 12951 0
intel_powerclamp 17122 0
intel_rapl 17344 0
snd_hda_intel 26327 3
mei_me 17893 0
snd_hda_controller 26631 1 snd_hda_intel
coretemp 12820 0
arc4 12501 2
mei 74977 1 mei_me
snd_hda_codec 108219 5 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_intel,snd_hda_controller
rtl8188ee 82986 0
i915 865455 3
rtl_pci 25944 1 rtl8188ee
kvm_intel 138825 0
rtlwifi 54679 2 rtl_pci,rtl8188ee
mac80211 502208 3 rtl_pci,rtlwifi,rtl8188ee
cfg80211 438375 2 mac80211,rtlwifi
snd_hwdep 17205 1 snd_hda_codec
psmouse 98914 0
snd_pcm 88538 4 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel,snd_hda_controller
serio_raw 12849 0
pcspkr 12595 0
toshiba_acpi 27036 0
sparse_keymap 12760 1 toshiba_acpi
rfkill 18860 6 cfg80211,toshiba_acpi,bluetooth
kvm 404853 1 kvm_intel
drm_kms_helper 49151 1 i915
drm 253663 5 i915,drm_kms_helper
toshiba_bluetooth 12641 0
crct10dif_pclmul 13348 0
iTCO_wdt 12831 0
iTCO_vendor_support 12649 1 iTCO_wdt
i2c_algo_bit 12751 1 i915
lpc_ich 20768 0
i2c_i801 16964 0
crc32_pclmul 12915 0
i2c_core 50108 7 drm,i915,i2c_i801,drm_kms_helper,i2c_algo_bit,v4l2_common,videodev
ghash_clmulni_intel 12978 0
evdev 17445 17
video 17991 1 i915
wmi 17339 1 toshiba_acpi
cryptd 18613 1 ghash_clmulni_intel
snd_seq 57061 0
snd_seq_device 13132 1 snd_seq
snd_timer 26768 2 snd_pcm,snd_seq
snd 69285 16 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_hda_codec_generic,snd_hda_codec,snd_hda_intel,snd_seq_device
soundcore 13026 2 snd,snd_hda_codec
ac 12715 0
battery 13356 0
processor 28159 0
button 12944 1 i915
ext4 489947 1
crc16 12343 2 ext4,bluetooth
mbcache 17171 1 ext4
jbd2 82399 1 ext4
sg 29973 0
sd_mod 44302 3
crc_t10dif 12431 1 sd_mod
crct10dif_common 12356 2 crct10dif_pclmul,crc_t10dif
sr_mod 21903 0
cdrom 51680 1 sr_mod
rtsx_usb_sdmmc 21184 0
mmc_core 106257 1 rtsx_usb_sdmmc
rtsx_usb 17487 2 rtsx_usb_sdmmc,rtsx_usb_ms
mfd_core 12601 2 lpc_ich,rtsx_usb
hid_generic 12393 0
usbhid 48607 0
hid 102250 3 hid_multitouch,hid_generic,usbhid
ata_generic 12490 0
thermal 17559 0
crc32c_intel 21809 0
xhci_hcd 152894 0
ehci_pci 12512 0
ehci_hcd 69635 1 ehci_pci
ata_piix 33638 2
alx 36121 0
mdio 12599 1 alx
fan 12681 0
thermal_sys 27546 6 fan,video,intel_powerclamp,thermal,processor,x86_pkg_temp_thermal
usbcore 199395 6 uvcvideo,rtsx_usb,ehci_hcd,ehci_pci,usbhid,xhci_hcd
usb_common 12440 1 usbcore
libata 181416 2 ata_generic,ata_piix
scsi_mod 195196 4 sg,libata,sd_mod,sr_mod
|
It looks like hid_multitouch might be your driver.
Before blacklisting, try the following:
modprobe -r hid_multitouch
If this works then add it to the blacklist
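Blacklisting is done with a one-line modprobe config fragment; a sketch (the filename below is an assumption, any .conf file under /etc/modprobe.d/ works):

```
# /etc/modprobe.d/disable-touchscreen.conf
blacklist hid_multitouch
```

After adding it, you may also need to rebuild the initramfs (e.g. update-initramfs -u on Debian-based systems) so the module is not pulled in at boot either.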
| How to disable my touch screen |
1,352,615,133,000 |
In school we have been assigned homework in which we are supposed to print ASCII art in a terminal window. The input is data in the format [x_coordinate, y_coordinate, char_ascii_value] (there is no data for coordinates where no character should be printed). I don't have any trouble actually doing it, but I guess I am simply too lazy to write a for loop that prints an empty space every time there is no data for a character, then goes to another line in the terminal, and so on.
So I was thinking that there must be an easier way! Since we are allowed to work only with POSIX commands, is there any command that allows you to move the cursor to a specific position in the terminal?
I ran into the command named tput, and tput cup does exactly what I need, but I am not quite sure whether tput cup is in POSIX.
P.S. Please don't take this as some kind of cheating. I am just trying to find a way to make my life easier instead of mindlessly writing code.
|
As mikeserv explains, POSIX doesn't specify tput cup. POSIX does specify tput but only minimally. That said, tput cup is widely supported!
The standardised way of positioning the cursor is using ANSI escape sequences. To position the cursor you'd use something like
printf "\33[%d;%dH%s" "$Y" "$X" "$CHAR"
which will print $CHAR at line $Y and column $X. A more complete solution would be
printf "\0337\33[%d;%dH%s\0338" "$Y" "$X" "$CHAR"
which saves the cursor position first and restores it after printing. (Note the leading zero in the octal escapes: \0337 is ESC followed by 7, the DEC save-cursor sequence, whereas \337 would be a single unrelated byte.)
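For the original ASCII-art task, the positioning sequence can be wrapped in a tiny helper; a minimal sketch (the function name and the commented usage format are my own, not from the question):

```shell
# draw_at X Y CHAR: print CHAR at column X, row Y (1-based, as CUP expects)
draw_at() {
    printf '\33[%d;%dH%s' "$2" "$1" "$3"
}

# Hypothetical usage with the question's "x y ascii_value" records:
# while read -r x y code; do
#     draw_at "$x" "$y" "$(printf "\\$(printf '%03o' "$code")")"
# done < art.txt
```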
| Posix command that moves cursor to specific position in terminal window |
1,352,615,133,000 |
Can you configure the cursor in Eclipse to be a (possibly non-blinking) block, instead of a (blinking) bar?
I am running Xfce 4.10.
|
Short answer: no
Long:
From comments, OP clarified that the question was about Eclipse. The clue that the question was about the application's cursor (displayed as a part of the graphics within the window) rather than the desktop cursor was the comment about the blinking bar. Desktop cursor themes do not blink, and rarely are just a bar.
If Eclipse supported a change of cursor shape, that would be in one of the Preferences tabs, e.g., for the editor. OP does not find it there.
Web searches for the cursor shape in Eclipse only find comments that the shape is determined by whether you are in insert- or replace-mode. Seeing that, it appears that Eclipse does not allow this feature to be user-customized.
In a check with OSX, I see a feature in
General
Editors
Text Editors
Accessibility
as Use Custom Caret and Enable thick caret, which are both checked by default. But there is no check-box for blink.
| Block cursor for Eclipse |
1,352,615,133,000 |
Xcursor is a format for the graphics of the cursor in X11 (file reports X11 cursor).
xcursorgen allows you to convert PNG files and some metadata to Xcursor files.
How do I convert an Xcursor file to PNG images?
ImageMagick's convert unfortunately returns:
no decode delegate for this image format
|
Use xcur2png.
Xcur2png takes PNG images from Xcursor-file, and generate config-file
which is reusable by xcursorgen. To put it simply, it is converter
from X cursor to PNG image.
Basic usage
xcur2png cursorfile
Converting all cursor files of a theme
find . ! -name '*.*' -type f -exec xcur2png {} \;
Availability
There are only unofficial packages for Arch Linux and Ubuntu (64-bit), but the app is hassle-free to compile.
This answer would not have been possible without don_crissti pointing out the tool.
| Convert Xcursor to PNG |
1,352,615,133,000 |
I've changed my .Xdefaults for a black background, white foreground, but my cursor is now opaque. I can't see the letter I'm over, and worse I can't see my screen hardstatus.
Google is just pulling up how to make the whole term transparent.
How can I make my cursor transparent again?
$ cat .Xdefaults
URxvt*transparent: true
URxvt*tintColor: Black
URxvt*shading: 110
URxvt*saveLines: 60000
URxvt*foreground: White
URxvt*font: xft:Droid Sans Mono:pixelsize=14
URxvt*scrollBar: false
# Clickable links
URxvt*perl-ext-common: default,matcher
URxvt*urlLauncher: firefox
URxvt*matcher.button: 1
|
With that colour scheme, you could define a cursor colour that provided sufficient contrast, so that it would be readily visible in the window and also transparent enough to highlight the letter it is over.
Try: URxvt.cursorColor: #666
One other thing you might want to change: comments in this file are a ! not a # - it could save you from some grief further down the track...
| urxvt transparent cursor |
1,352,615,133,000 |
Saving and restoring the cursor position should be possible with simple ANSI escape sequences
ANSI escape sequences allow you to move the cursor around the screen at will. This is more useful for full screen user interfaces generated by shell scripts, but can also be used in prompts. The movement escape sequences are as follows:
[...]
Save cursor position: \033[s
Restore cursor position: \033[u
Source: Bash Prompt HOWTO: Cursor movement
However, it seems that this ANSI sequences restore only the horizontal position of the cursor. For example:
$ printf 'Doing some task...\e[s\n\nMore text\n\e[udone!\n\n\n'
Doing some task...
More text
done!
$
where the done! is horizontally at the correct position but not vertically (correct in the sense of restored).
Am I missing something, i.e. can you reproduce this?!
Is this the intended desired behaviour? If so, how would I get the done! printed after the task...?
If this should not happen, might this behaviour be triggered indirectly by something in my environment?
I searched and read the many questions about, but I did not find anything about this behaviour I experienced. Actually, the same occur with tput via
$ printf 'Doing some task...'; tput sc; printf '\n\nMore text\n'; tput rc; printf 'done!\n\n\n'
|
Am I missing something, i.e. can you reproduce this?!
I can, if I'm at the bottom of the terminal and the next line makes the content move up. But repeat the test in a terminal that doesn't scroll in the meantime. Hit Ctrl+L (or invoke clear) and start from the top. Then it behaves as you wish.
Is this the intended desired behaviour?
I think so. Cursor position is relative to the screen, not to its content.
How would I get the done! printed after the task...?
Possible approach: If you know you're going to print no more than 6 lines and the terminal is big enough, print 6 empty lines first so it scrolls first, then move the cursor up and only then print the meaningful text:
printf '\n\n\n\n\n\n'; printf '\033[6A'; printf 'Doing some task...\e[s\n\nMore text\n\e[udone!\n\n\n'
I used three separate printfs to show the logic, but it could be one.
| Restore cursor position after having saved it |
1,352,615,133,000 |
Below is a screenshot from Chromium browser, running in Razor-Qt desktop,
In other DEs, the cursor is just normal, but here you can see it becomes a big X, anyone know how to fix that? The cursor theme is not broken, as it works for KDE4 and XFCE4
P.S That happens to all GTK apps, Qt app works fine
|
That is the default cursor. You can run:
xsetroot -cursor_name left_ptr
to set the pointer to the left arrow. Typically, this goes in your .xinitrc file.
| Mouse cursor became a big X |
1,352,615,133,000 |
I need to use some Alt and Shift cursor combinations in emacs and the KDE Konsole is interfering with them. It switches the tabs instead. How can I disable that behaviour?
It is hard to tell whether it is something to configure in the necessary terminal itself or it is something controlled at the level of the GUI.
|
This can partly be done in the Settings->Configure Shortcuts menu in Konsole.
As can be seen from the image the Shift Right key was for navigating to the next tab, and it has been disabled.
The Alt key may require different dialogs and there is a related question - How can I remap the shortcut keys for scroll down/up in gnome terminal which also gives some insight into the process
| How can KDE Konsole be prevented from trapping the Ctrl/Alt/Shift + Cursor keys? |
1,352,615,133,000 |
I have switched to Emacs as my editor and when using multiple windows in the terminal I want the cursor to flash in the current window.
I have run the (blink-cursor-mode t) command and nothing happens. I have also tried a few commands to get the cursor to flash in the terminal and nothing happens.
|
It's a konsole setting: Settings → Edit Current Profile… → Advanced, checkbox "Blinking cursor". I had to exit and restart konsole before it took effect here.
| How can I get a blinking cursor in KDE Konsole? |
1,352,615,133,000 |
The pointer moves too fast (demonstration video) when I use the tablet in relative mode after I run
xsetwacom set "Wacom Bamboo stylus" Mode Relative
However, xsetwacom --list parameters does not show an obvious setting to change this.
My mouse is set to accelerate rather a lot: xset m 4 1. Running xset m 0 0 makes it possible to slow down the cursor in general but I need to do this only for the tablet.
|
The solution is to scale the Area parameter. To get the current value:
$ xsetwacom get "Wacom Bamboo stylus" Area
0 0 14760 9225
The value might depend on my setup (two screens, 3840x1200 resolution) or on my tablet (Bamboo MTE-450) so it might be different in your case.
Multiply the values by the factor by which you want to slow down the pointer, for example to make the pointer 3 times as slow, run
$ xsetwacom set "Wacom Bamboo stylus" Area 0 0 44280 27675
To set the value when Xorg starts, you can write the command into ~/.xinitrc
Note: it seems that until recently, BottomX and BottomY would have to be changed for this effect.
This solution was found thanks to an Arch Linux forums thread, which used BottomX but made it clear how to calculate and what the xsetwacom commands should look like.
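The scaling step lends itself to a small helper; a sketch (the function name is mine), assuming the four-number Area output format shown above:

```shell
# scale_area FACTOR: read "x1 y1 x2 y2" on stdin, multiply each number by FACTOR
scale_area() {
    awk -v f="$1" '{ printf "%d %d %d %d\n", $1*f, $2*f, $3*f, $4*f }'
}

# Hypothetical usage: make the pointer 3 times as slow
# xsetwacom set "Wacom Bamboo stylus" Area \
#     $(xsetwacom get "Wacom Bamboo stylus" Area | scale_area 3)
```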
| How to make a Wacom tablet stylus slower (less sensitive) |
1,352,615,133,000 |
When I move my mouse slowly over the desktop, the pointer often jumps a few pixels (one or two) in the opposite direction of the one I am moving in. This is horrible when trying to place the cursor around some semicolons in Eclipse. I guess this is the result of a wrongly set resolution: I suppose the mouse was initially set to be really fast, and even if I do xset m 1/2 3, the mouse is just too fast and imprecise for me.
I already tried configuring xorg.conf like this:
Section "InputDevice"
Identifier "Configured Mouse"
Driver "mouse"
Option "Device" "/dev/mouse"
Option "Protocol" "Auto"
Option "Name" "Logitech G3"
Option "Resolution" "2000"
EndSection
But with no effect.
EDIT
One new thing I noticed is that, in the mouse settings, I can slide the slider to max or to min and the mouse behaviour (sensitivity) does not change. I also found something curious in /var/log/Xorg.0.log:
[ 257.409] (II) config/udev: Adding input device Logitech USB Gaming Mouse (/dev/input/event1)
[ 257.409] (**) Logitech USB Gaming Mouse: Applying InputClass "evdev pointer catchall"
[ 257.409] (II) Using input driver 'evdev' for 'Logitech USB Gaming Mouse'
[ 257.409] (**) Logitech USB Gaming Mouse: always reports core events
[ 257.409] (**) evdev: Logitech USB Gaming Mouse: Device: "/dev/input/event1"
[ 257.409] (--) evdev: Logitech USB Gaming Mouse: Vendor 0x46d Product 0xc042
[ 257.409] (--) evdev: Logitech USB Gaming Mouse: Found 20 mouse buttons
[ 257.409] (--) evdev: Logitech USB Gaming Mouse: Found scroll wheel(s)
[ 257.409] (--) evdev: Logitech USB Gaming Mouse: Found relative axes
[ 257.409] (--) evdev: Logitech USB Gaming Mouse: Found x and y relative axes
[ 257.409] (II) evdev: Logitech USB Gaming Mouse: Configuring as mouse
[ 257.409] (II) evdev: Logitech USB Gaming Mouse: Adding scrollwheel support
[ 257.409] (**) evdev: Logitech USB Gaming Mouse: YAxisMapping: buttons 4 and 5
[ 257.409] (**) evdev: Logitech USB Gaming Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200
[ 257.409] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:1a.1/usb3/3-1/3-1:1.0/input/input1/event1"
[ 257.409] (II) XINPUT: Adding extended input device "Logitech USB Gaming Mouse" (type: MOUSE, id 8)
[ 257.409] (II) evdev: Logitech USB Gaming Mouse: initialized for relative axes.
[ 257.409] (**) Logitech USB Gaming Mouse: (accel) keeping acceleration scheme 1
[ 257.409] (**) Logitech USB Gaming Mouse: (accel) acceleration profile 0
[ 257.409] (**) Logitech USB Gaming Mouse: (accel) acceleration factor: 2.000
[ 257.409] (**) Logitech USB Gaming Mouse: (accel) acceleration threshold: 4
[ 257.409] (II) config/udev: Adding input device Logitech USB Gaming Mouse (/dev/input/mouse0)
[ 257.409] (II) No input driver specified, ignoring this device.
[ 257.409] (II) This device may have been added with another device file.
Still my question is:
How do I setup my mouse correctly in Debian wheezy?
|
Okay, well, that took a while, but I got a solution. Meanwhile I even bought a new mouse.
When you have a mouse with a high DPI and want to use its standard DPI with minimum acceleration (which is going to be too fast anyway), follow these steps:
Get xinput
$ sudo apt-get install xinput
List your input devices
xinput --list
You should get an output like this:
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ HID 1d57:0005 id=8 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Power Button id=7 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)]
In my case my "HAMA uRAGE" is HID 1d57:0005. Remember its id.
Now comes the magic. I would prefer to be able to increase the resolution, but Debian obviously doesn't want me to. Type in:
xinput set-float-prop <id> 'Device Accel Constant Deceleration' <d>;
where <id> is to be replaced by your mouse's id and <d> by the deceleration factor. You have to play around a little bit, like I did. At least X does not need a restart to apply the changes.
EDIT:
To make it permanent edit X11 settings.
sudo nano /etc/X11/xorg.conf
Add: Option "ConstantDeceleration" "10"
Example:
Section "InputClass"
Identifier "My mouse"
MatchIsPointer "true"
Option "ConstantDeceleration" "10"
EndSection
But if you often change your system and want some kind of portable config, add xinput to your .xinitrc instead.
Mine is
xinput --set-prop "HID 1d57:0005" "Device Accel Constant Deceleration" 2
| Weird mouse behaviour. Mouse too fast |
1,352,615,133,000 |
I want to change the shape of the cursor in my various (emulated) terminals.
The shape that I want is ⼕ (sorry if it doesn't render). It's a three sided box that opens to the right. This way I could see where insertions are, and also see which character the cursor is "on". I found that character in Unicode at U+2F15.
I definitely want to be able to do this both in the kernel virtual terminals of Linux (i.e. what one gets with Control+Alt+FN) and in GUI terminal emulators like (say) XTerm and RXVT. If possible, I want to do this even in network terminals like (say) PuTTY and KiTTY. If I cannot get that exact character, I'd like at least that three-line shape.
I understand that this will involve either "ANSI" escape codes or (perhaps) settings in the terminal emulator (although, given that those will not apply to the Linux built-in terminal emulator, this is not preferable). Please provide an answer that is not dependent on using any particular shell. Is this even possible without altering the kernel code?
|
The shapes for cursors that are available in virtual terminals and real terminals are limited.
Generally, they only enable setting shapes that match old display hardware, which usually only permitted specifying a blink cycle and a starting and an ending scanline for when the cursor was gated on, sometimes only a very limited subset of start+end combinations (e.g. underline, overline, half-height, block).
The two major control sequences for this are DECSCUSR and LINUXSCUSR.
DECSCUSR is DEC's name for the control sequence that DEC supported in its later range of terminals.
Like other manufacturers of real terminals, in its doco DEC gave its vendor-private control sequences names that began "DEC".
(In its doco, Tektronix used the "TEK" prefix for naming its vendor-private control sequences, for comparison.)
The Linux doco is quite poor, as usual, and doesn't name stuff.
So "LINUXSCUSR" is my coinage, with a "LINUX" prefix by analogy.
Neither DECSCUSR nor LINUXSCUSR are standardized.
They are different from each other, but they were invented at roughly the same time (only appearing for the DEC VT 5xx in the 1990s) so there wasn't the usual years of DEC prior art. ☺
Egmont Koblinger has commented elsewhere that the model of both is underwhelming, as it conflates blinking with shape.
There also has been some discussion of changing the meaning of DECSCUSR 0 to enable user-specified shapes.
And Microsoft Terminal has highlighted the mismatch between the DECSCUSR model and the model used in the Win32 console mechanism, which has allowed arbitrary start lines for three decades (four decades if one accounts for its predecessors in the VIO subsystem of OS/2 1.x and the PC/AT video firmware).
The upshot is that there isn't a single control sequence that will work universally, the world currently dividing into DECSCUSR and LINUXSCUSR camps, because almost no terminal emulator supports both.
Moreover, with these two you do not have anywhere near the flexibility that you want.
The only widespread deviation from the start+end scanline model is a vertical bar, and that you only get with some GUI terminal emulators (e.g. XTerm), which have added one additional shape as DECSCUSR 5 and DECSCUSR 6.
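For terminals in the DECSCUSR camp, selecting among those stock shapes is a single short sequence, CSI Ps SP q; a sketch (the function name is mine; parameter values as described above: 1/2 blinking/steady block, 3/4 underline, 5/6 the bar extension):

```shell
# Emit DECSCUSR: ESC [ Ps SP q
set_cursor_shape() {
    printf '\33[%d q' "$1"
}

# set_cursor_shape 2   # steady block
# set_cursor_shape 6   # steady bar (the xterm extension)
```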
Yes, you could modify the code of the FreeBSD kernel, NetBSD kernel, OpenBSD kernel, and Linux built-in terminal emulators, and of the various application-mode terminal emulators (framebuffer and X11 GUI) to do more cursor shapes.
It would be quite tough to make it universal, though.
I've done this in my terminal emulator. DECSCUSR 7/8 are an outline box. DECSCUSR 9/10 are a star. DECSCUSR 11/12 are underline+overline. DECSCUSR 13/14 are a reversed "L" shape. I've been thinking, based upon reading some old 1970s literature, of adding two orientations of square brackets and overline-only. But DECSCUSR does not readily lend itself to the sort of arbitrary specification of actual Unicode characters that you are looking for. LINUXSCUSR does not match that idea at all, moreover.
Further reading
Jonathan de Boyne Pollard (2019). console-terminal-emulator. nosh Guide. Softwares.
Jonathan de Boyne Pollard (2019). console-control-sequence. nosh Guide. Softwares.
CONSOLE_CURSOR_INFO structure. Microsoft.
VIOCURSORINFO. EDM/2.
Cay Weitzman (1974). Minicomputer Systems: Structure, Implementation, and Application. Prentice-Hall. ISBN 9780135842270.
"Cursor Appearance in the Linux Console". Linux Gazette. Issue 137. April 2007. ISSN 1934-371X.
Jan Tourlamain (2019-06-25). Consider supporting (extension to VT,) DECSCUSR "0" to restore cursor to user default to help vim/others. Microsoft Terminal issues #1604.
VT510 Video Terminal Programmer Information. EK-VT510-RM. November 1993. DEC.
VT520/VT525 Video Terminal Programmer Information. EK-VT520-RM. July 1994. DEC.
| How can I make a custom terminal cursor shape? |
1,593,554,170,000 |
I have a custom PC build (AMD Ryzen 3800x, ASUS TUF GAMING X570-PLUS mobo, Nvidia 1660 TI, 16GB DDR4 3200MHz RAM) running Linux Mint 20 (recently released), kernel 5.4.0-39-generic. While it works normally, the first TTY just shows the LM splash screen spinner (like when it's booting) and TTYs 2 to 6 are a black screen with a blinking cursor.
Other answers I've seen report this issue with Nvidia proprietary drivers, which I do have (driver ver 440) but I'm unsure how to fix this. Booting with nomodeset as a kernel parameter does nothing and has no effect at all. Bear in mind I can freely switch between TTYs (while they all have a blinking cursor and a splash screen for TTY1) and get back to the GUI on TTY7 like normal. Is there a way to get the TTY's back up like it was before?
As far as I recall, running LM19.3 with the same Nvidia drivers keeps the TTYs working - but if there's a fix to using LM20 then I will be more than happy to use it.
|
After some messing around, I've found the fix for this situation. Removing splash from the kernel params lets me access the TTYs just like normal.
I noticed TTY1 had the splash screen constantly there, so first removing quiet splash (with success) and then just splash (which also worked) showed me that splash was indeed the issue. As previously mentioned, I had used nomodeset without any help, and while some people online said they could still use the TTYs, just without seeing the text, this was NOT the case for me: I could not use any of the six TTYs.
Whether this is indeed NVIDIA driver related, I don't know. Hope this helps someone else though.
| Linux Mint 20 TTY 1-6 blinking cursor, GUI at tty7 works normally |
1,593,554,170,000 |
I know that the X server can be started with the -nocursor flag, which hides the mouse pointer; however, I can't seem to find the same option for Wayland. I am running an Electron app under (X)Wayland and the mouse cursor is visible (it can't be hidden with CSS without moving the mouse; a Chromium bug). I should add that the device is a tablet without any mouse attached to it, yet mouse0 is listed in /dev/input/. Any thoughts?
Thanks.
|
The only way I found to solve this was to replace the Adwaita cursors. First I generated an X11 cursor file with xcursorgen from a transparent PNG file, and then replaced all the X11 cursor files in /usr/share/icons/Adwaita/cursors/....
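For reference, the xcursorgen input is a plain-text config file of the form "size xhot yhot png-file [ms-delay]"; a sketch of an all-transparent cursor, assuming transparent.png is a fully transparent 32x32 PNG (the filenames are my own):

```
# transparent.cfg
32 0 0 transparent.png
```

Running xcursorgen transparent.cfg left_ptr would then produce an Xcursor file that can stand in for the ones under the theme directory.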
| Hiding cursor in Wayland |
1,593,554,170,000 |
I have Geany installed via apt-get on Debian 8. I want to set the caret shape to a block, but I can't find any filetypes.common in my .config/geany/filedefs. How can I do this?
|
The file ~/.config/geany/filedefs only exists if you have created it, either manually or by using the menu Tools / Configuration Files / filetypes.common to save your configuration.
Debian's package for geany-common has the default settings for this file in /usr/share/geany/filetypes.common
| Geany text editor caret block |
1,593,554,170,000 |
I am using Fedora 20, and whenever a new line opens in the command line terminal, the cursor, which is a solid black rectangle, flashes on and off about ten times, then remains steady. I think I have read somewhere that I can do something useful during the flashing period, but I have forgotten what it was, or where to find the reference again; or am I just imagining it?
Please can someone confirm or explain this?
In response to @sim's query about the terminal emulator:
[Harry@localhost ~]$ echo $TERM
xterm-256color
|
I can't find anything about "something useful" you can do during that time (though some random undocumented feature would not surprise me). However, it seems that this behavior is to "save energy" (by not having to wake up the GPU and redraw the screen for each blink).
See the related question, and the (rejected) GNOME bug.
| Why does terminal cursor flash briefly? |
1,593,554,170,000 |
xdotool gives the ability to insert text at the current text insertion point, but it doesn't seem(?) to have an option which gets the actual screen x,y coordinates... Is there any such program? The reason is: I want the mouse to jump to the current text insertion point, i.e. to the text cursor.
|
I'm pretty sure there's no way to do this in full generality, because text cursors are an application feature, not a server feature like mouse cursors. The application decides where to place input based on its internal data structures, and the text cursor is a way to tell the user what it's going to do with the input. As far as the X server is concerned, there's a focused window and that's it; the focused window does whatever it likes with the input.
Now, I can't think of an application that actually has more than one text cursor, in an abstract UI sense (some have none, of course). But unless the application has an interface to tell others where the text cursor is, or the text cursor is visually distinctive, I don't think you can get at it.
| Is there a program which gives the X screen co-ords of the Text-cursor (insertion point)? |
1,593,554,170,000 |
After I've installed proprietary FGLRX driver (version 1:14.9+ga14.201-2) from Debian 8 Jessie repositories, my cursor became invisible. All mouse actions are working correctly, just can't see the cursor.
I've got Lenovo E420 laptop with Intel/AMD hybrid graphics, discrete card is Radeon HD6630M.
After the install, I created the /etc/X11/xorg.conf file with aticonfig --initial. The FGLRX driver seems to be working correctly; I can even see the cursor in OpenGL apps like games. But after returning to the desktop it's gone.
I'm using KDE, but this issue seems not to be dependent on environment - tried eg. JWM.
When I delete xorg.conf, I can see the cursor again, but then the driver is not working.
Kernel version is 3.16.0-4-amd64.
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Whistler [Radeon HD 6630M/6650M/6750M/7670M/7690M]
$ cat /etc/X11/xorg.conf
Section "ServerLayout"
Identifier "aticonfig Layout"
Screen 0 "aticonfig-Screen[0]-0" 0 0
EndSection
Section "Module"
EndSection
Section "Monitor"
Identifier "aticonfig-Monitor[0]-0"
Option "VendorName" "ATI Proprietary Driver"
Option "ModelName" "Generic Autodetecting Monitor"
Option "DPMS" "true"
EndSection
Section "Device"
Identifier "aticonfig-Device[0]-0"
Driver "fglrx"
BusID "PCI:1:0:0"
EndSection
Section "Screen"
Identifier "aticonfig-Screen[0]-0"
Device "aticonfig-Device[0]-0"
Monitor "aticonfig-Monitor[0]-0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
EndSubSection
EndSection
|
Installing fglrx-driver (1:15.7-3) from Debian 9 Stretch repository solves the problem.
| Invisible cursor after FGLRX install (Debian Jessie) |
1,593,554,170,000 |
I'm working on code for a serial terminal, implementing the ANSI escape codes for moving the cursor around, clearing the screen, etc., and I am curious how to know which code I have received, since there doesn't seem to be a clear stopping point for the codes.
I'm using https://www2.ccs.neu.edu/research/gpc/VonaUtils/vona/terminal/vtansi.htm as a reference
For example, if I receive the code,
I start reading characters, but if I get the value 75='K', that could be ESC[K = Erase End of Line, or a 75 as a count for a code like ESC[{COUNT=75}C for move cursor 75 columns right.
What if I was receiving the code to erase the line followed by a printed A? As far as I know the code for that and the cursor 75 cols right would receive the exact same sequence.
I'm probably missing something obvious but could someone please give me a hint? Thank you
|
For "ANSI" (actually ECMA-48), the characters which begin the control sequence determine the set of final characters. It's documented near the beginning of ECMA-48 (section 5.4 is particularly pertinent, though you may need an ASCII chart to understand its terminology).
The parameter 75 in a control sequence would be the two digit characters 7 and 5, rather than a single character whose value happens to be 75. There's no confusion between the two.
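The practical consequence for a parser: after the CSI introducer, parameter bytes and the final byte come from disjoint ranges, so the sequence is unambiguous. A minimal sketch (the function name is mine; it handles only digit/semicolon parameters, a subset of what ECMA-48 allows):

```shell
# parse_csi BYTES: BYTES are the characters following "ESC [".
# Prints "<params> <final>"; params is "none" when absent.
parse_csi() {
    s=$1 params=''
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}    # peel off the first character
        s=${s#?}
        case $c in
            [0-9] | ';') params=$params$c ;;              # "75" arrives as two bytes
            [a-zA-Z@~]) printf '%s %s\n' "${params:-none}" "$c"; return ;;
        esac
    done
}
```

So parse_csi 75C yields "75 C" (cursor right 75) while parse_csi K yields "none K" (erase to end of line); the two never collide.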
The link you cited was for a document written by someone who was unfamiliar with the standard. It's mentioned in the ncurses FAQ How do I get color with VT100?.
| How to know the end of an ANSI control code? |
1,593,554,170,000 |
Often I want to skim through a document or piece of code, which I do with page down (Ctrl-D) and page up (Ctrl-U), but it feels like I am violating the spirit of Vim using emacs-like chords/control keys. Is there a non-control key method of skimming through a document?
|
Ctrl-D, Ctrl-U, Ctrl-F, Ctrl-B are pretty standard for this, but there are a few other ways I've found useful:
Ctrl-E and Ctrl-Y scroll one line down and one line up, respectively, without moving the cursor (unless it would be moved off the screen, of course). These are handy because they accept counts, i.e., 5Ctrl-E will "Expose" five more lines at the bottom of the screen.
zz (lowercase!) scrolls the text to place the line the cursor is on in the center of your screen (or window in gvim)
zt scrolls to place the current line at the top of your screen
zb scrolls to place the current line at the bottom of your screen
And H, M and L place the cursor respectively on the top, middle and bottom lines currently on the screen.
This means that Lzt scrolls down one page (minus one line) and Hzb scrolls up one page (minus one line), while Lzz and Hzz mirror pretty closely the behavior of Ctrl-D and Ctrl-U.
Although honestly, I usually just use Ctrl-D and Ctrl-U. :)
| Moving by page without chords in Vim |
1,593,554,170,000 |
I have a kiosk system with an X server running, hosting different graphical programs.
All programs are mutually exclusive as their systemd units conflict.
On some of those programs I want to use a native X11 cursor, such as tcross.
I can set it in the respective application's systemd unit via xsetroot.
Is it also possible to hide the cursor using xsetroot or another tool without restarting the X server?
Options I already excluded:
-nocursor parameter of the X server - this disables the cursor for all applications for its entire runtime
unclutter - I want the cursor to hide on the respective application during its entire runtime and not only when it is not moved.
[Unit]
Description=Plain X.org server
After=plymouth-quit-wait.service
Conflicts=getty@tty7.service display-manager.service
[Service]
Type=simple
Environment=DISPLAY=:0
ExecStart=/usr/bin/Xorg vt7 -nolisten tcp -noreset -nocursor
# Wait for server to be ready and set kiosk configuration.
ExecStartPost=/usr/bin/kiosk
# Set chicken as cursor to be able to test touch screen
# and see whether X server is actually running.
ExecStartPost=/usr/bin/xsetroot -cursor_name tcross
Restart=on-failure
RestartSec=3
[Install]
WantedBy=graphical.target
|
If your X11 server has the XFIXES extension (seen in xdpyinfo), you can write a small C program to call XFixesHideCursor() on the root window to hide all cursors until the program ends. You will probably need to install some X11 development packages (like libXfixes-devel, but it depends on your distribution) to have the include file /usr/include/X11/extensions/Xfixes.h. Create a file nocursor.c to hold:
/* https://unix.stackexchange.com/a/726059/119298 */
/* compile with -lX11 -lXfixes */
/* https://www.x.org/releases/current/doc/fixesproto/fixesproto.txt */
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>
#include <stdlib.h>
#include <unistd.h>
int main(){
    Display *display = XOpenDisplay(NULL);
    if (display == 0)
        exit(1);
    int screen = DefaultScreen(display);
    Window root = RootWindow(display, screen);
    XFixesHideCursor(display, root);
    XSync(display, True);
    pause(); /* need to hold connection */
    return 0;
}
and compile with gcc -o nocursor nocursor.c -lX11 -lXfixes.
Run ./nocursor in a suitable environment with DISPLAY set, and cursors
should not appear until you interrupt the program.
| Programmatically toggle cursor visibility on X server |
1,593,554,170,000 |
I've installed the big-cursor package in Ubuntu 20.04.3 LTS, but it doesn't show up on the update-alternatives list, and I can't find any instructions on how to use it.
I use the dwm window manager and the command line (i.e., no Gnome, etc.). How do I use the cursor theme?
|
Cursor themes are configured via environment variables (or X resource settings of the Xcursor library).
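For example, a sketch of the environment-variable route (libXcursor reads XCURSOR_THEME and XCURSOR_SIZE; the theme name must match a directory under ~/.icons or /usr/share/icons — "big-cursor" here is an assumption, so check what directory your package actually installs):

```shell
# The theme name "big-cursor" is assumed; substitute the real
# directory name of the installed theme.
export XCURSOR_THEME=big-cursor
export XCURSOR_SIZE=48   # pixel size; pick what suits your display

# Equivalent X resource settings (put these in ~/.Xresources):
#   Xcursor.theme: big-cursor
#   Xcursor.size:  48
echo "$XCURSOR_THEME $XCURSOR_SIZE"
```

Put the export lines somewhere your session reads at startup (e.g. ~/.xinitrc before exec dwm) so they apply to every client.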
| How to use big-cursor in Ubuntu? |
1,593,554,170,000 |
In the Linux source code, specifically in linux/drivers/video/console/vgacon.c, there is a switch-case block for cursor shapes. Each of these shapes is a rectangle of the same width and varying height. Clearly, Linux handles the height of the cursor, but does it handle the width? Does Linux choose the width, or does the GPU decide? Does this vary between the other console drivers (some of which also have switch cases for cursors)?
|
In vgacon, the hardware chooses the width, and it’s always the full width of a character cell — that’s all that VGA supports. mdacon is similar, for the same reason.
Other console implementations with cursor size handling can be found by looking for CUR_UNDERLINE. Some of them, such as fbcon, could theoretically support cursors of varying widths too, but they all match the behaviour of the original Linux console (the VGA one) and use a fixed width.
| What handles virtual console cursor specifics? |
1,593,554,170,000 |
I'm using KDE Plasma, and I would like to disable cursor blinking in Qt5 applications (KWrite for example, but not only) thanks to the .so file in this git repo*, as there is no checkbox "disable cursor blinking" in the config panel :(
I've added an export LD_PRELOAD=/full/path/to/qt5noblink.so in my .bashrc file, but it only works for apps I launch from my shell, not when I doubleclick on a file.
Is there any way that Plasma globally takes care of this export line? (without rewriting all my executables: this previous question is not really what I'm looking for...)
Thanks!
(*For tricks on cursor blinking for other desktop environment read this, it's gorgeous!)
|
.bashrc is only read when you run an interactive shell. It's the wrong place to set environment variables: as you've discovered they are only set in applications started through an interactive shell.
To set an environment variable for your whole session, on most systems, you can set it in ~/.profile. Since you're using KDE, a better place might be ~/.config/plasma-workspace/env/preload.sh. This way the variable would be set only if you log in under KDE, not if you select another GUI environment or if you log in in text mode (e.g. over the network). Beware, however, that setting LD_PRELOAD very broadly can be dangerous: the library will be loaded into every single program that you run, not just into programs using the Qt library. This one looks harmless enough though.
| Graphically start applications with a custom LD_PRELOAD? |
1,593,554,170,000 |
I have the problem that my cursor icon doesn't get drawn on the secondary monitor of a two monitor setup. It looks like this on the main screen:
But on the second screen it look like:
Interestingly, if I screenshot it with gnome-screenshot -p, the cursor shows in the resulting picture (regardless of which monitor the window is on). This seems to imply that GNOME knows what is there but fluxbox or the GPU driver doesn't.
I've tried using other cursor icons, and while they show normally on the main monitor, they produce the same dashed line on the other screen. Happy to add any setting/config information that would be helpful.
Edit 1
I was a little worried that it might be a driver issue and therefore not easily fixable. I have a "Sapphire Nitro r9 390" using the HDMI out on the 'bad' screen and displayport on the 'good' screen (ah, so maybe I could put both through with displayport. Worth a try anyway.). I'm using the proprietary "AMD Catalyst Linux Graphics Driver" fglrx 15.20.3 [Sep 8 2015]. The command lspci | grep VGA shows:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290] (rev 80)
While the driver info fglrxinfo shows:
display: :0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon (TM) R9 390 Series
OpenGL version string: 4.5.13399 Compatibility Profile Context 15.201.1151
I'll try the DP for both and see if it helps.
Edit 2
I got another DP cable but it didn't seem to help. Mirroring the screens still has one with a normal cursor and one with the dots. I guess I'm stuck with it for the time being.
Edit 3
Updating to the latest Catalyst driver seems to have solved my problem, yay! The driver I used was the "Crimson Edition 15.12" dated 12/18/2015. The version info is now:
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon (TM) R9 390 Series
OpenGL version string: 4.5.13416 Compatibility Profile Context 15.302
|
The OP has reported that
updating to the latest Catalyst driver seems to have solved the problem.
The version info is now:
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon (TM) R9 390 Series
OpenGL version string: 4.5.13416 Compatibility Profile Context 15.302
| In Debian Fluxbox, how do I fix the cursor on the second head? [closed] |
1,593,554,170,000 |
I have an RPi connected via HDMI to a big TV a few meters from my workplace, and I want to change the size of the mouse pointer.
xsetroot -cursor_name ... will change just one type of cursor and does not cover all situations.
How can I load a custom image above all windows in any window manager, so that it follows mouse movements over all programs and xterms?
Preferred is some tool that is part of X.org, but a short script or a few lines of C source would be OK too.
... something like a simplified-as-possible version of the ONEKO game.
|
A bit late to the game, but just spotted this Large software cursor for screen recording on X11 repo and I think it does what you want. If you still care nearly 8 years later : D
| X Image following cursor mouse pointer |
1,593,554,170,000 |
In Bodhi Linux, there is a poor choice of mouse cursors - only those from the Moksha desktop theme. Is it possible to change the mouse cursor theme?
|
Install lxappearance
Install breeze-cursor-theme (find another one if you want, see below)
Open lxappearance (press Alt+Esc to open the quick launcher and start typing lxapp...)
Go to cursor tab
Choose the theme and apply
Reset the Moksha desktop with Ctrl+Alt+End
Other themes include chameleon-cursor-theme,
dmz-cursor-theme,
moblin-cursor-theme,
oxygen-cursor-theme,
oxygen-cursor-theme-extra, and
xcursor-themes.
| How to change the mouse cursor in Bodhi Linux |
1,593,554,170,000 |
I am using GNOME with a 4k monitor and 200% scaling and everything seems to be working fine, but after logging out and logging back in, cursor scaling seems to reset to 100%. I am attaching the photo since taking a screenshot fixes cursor scaling back to normal. The cursor also goes back to normal if I change the display scaling to 100% and then back to 200%.
|
The problem was with Gnome Extension, not the Gnome itself.
The extension name is "Soft brightness" and it seems to be fixed now.
| Tiny cursor after logout |
1,593,554,170,000 |
Recently tried installing both latest ApricityOS and Fedora 25 (gnome versions). The cursor freezes in the top left corner. I can still click on things (if I guess correctly). Everything else seems to work fine. I've got an Nvida GTX1080 which I suspect is the problem. I've read that there are some issues with nouveau and the Pascal series, and issues with Wayland and Nvidia. Does anyone know how I can get either working?
|
I ran into the same issue, and found the solution here:
https://bugzilla.redhat.com/show_bug.cgi?id=1398764
To summarize:
First, you'll have to install Fedora using the keyboard. Slightly annoying, but not too hard.
Then, after you're installed and booted, run the following:
sudo dnf config-manager --add-repo=http://negativo17.org/repos/fedora-nvidia.repo
sudo dnf upgrade
sudo dnf install nvidia-driver nvidia-settings kernel-devel
Finally, restart, and you should have a mouse.
| Cursor Frozen During Install |
1,593,554,170,000 |
Condition: unstable caret-cursor and its position
Other complications: many Enter artifacts, and much content lost because of sudden selections and overwrites; often Ctrl+Z does not work, etc. (discussed in Google Product Forums), so much work time is lost. My typing speed in Debian is now 10-30 WPM; normally, 80-95 WPM long-term with Dvorak.
Configuring Keyboard > Typing as in Fig. 1 cannot solve the artifacts, so there must be something internal going on; possibly related to the touchpad/...
Keyboard-layout independent - occurs in Qwerty and Dvorak
No firmware errors - dmesg | grep firmware returns no relevant errors/warnings
Keyboard-independent - problems occur with an external keyboard too
I have to make about two corrections every sentence because of the abnormal typing behaviour. I don't think I am touching the touchpad by accident either. I have many more typing artifacts now than with Ubuntu 16.04.
Fig. 1 Options which I change to find optimal configuration but not sufficient and something internal must be going on
System characteristics
I purged already xserver-xorg-video-intel because many bugs so using modesetting and (2)
Backported Linux kernel (4.6) and Skylake CPU support and firmware installed (thread How Smooth is Upgrading Linux kernel in Debian 8.5?)
apt-get -t jessie-backports install linux-image-amd64
apt-get -t jessie-backports install firmware-misc-nonfree
Installed wifi firmware
apt-get -t jessie-backports install firmware-iwlwifi
Proposals
Some firmware may still be missing, even though there are no firmware errors; the problem is severe and the Keyboard > Typing settings are not sufficient.
Insufficient Skylake support in Linux kernel 4.6? I will rule this out by trying 4.7. I think the CPU graphics can affect the cursor's location, whose stability seems to be the main problem here.
Hardware: Asus Zenbook UX303UA
Debian: 8.5 64 bit
Linux kernel: 4.6 backported
Keyboard layout: Qwerty, Dvorak, ...
|
Fig. 1 Disable touchpad while typing, Fig. 2 Keyboard > Typing settings, Fig. 3 Disable long-key presses, Fig. 4 Better long-term WPM by using Fig. 3's option
I think the problem is mainly caused by the over-sensitive touchpad, which changes the cursor's position a lot. I made the following change, i.e. disabled the touchpad while typing (Fig. 1), and it resolves the problem significantly.
Then there is the problem of accidental long presses when typing fast, so I think disabling "Key presses repeat when key is held down" is the best option here (Fig. 3). Fig. 2 is also an option, but there you should be able to calibrate long presses individually, which is not possible by default; this is extended in the thread How to Have key-repeats of Arrow keys when disabled key-repeats? for arrow keys.
How to Calibrate your Typing Speed?
I play at play.typeracer.com, where I found one good configuration with the settings in Fig. 2, but later I found Fig. 3's option better, because long presses cause many problems when making typing mistakes.
I can reach about 10% greater WPM by disabling long presses (Fig. 4).
Other changes needed in my typing
To compensate for the absence of long presses, use Ctrl+Backspace and Ctrl+Z for removing a word and undoing, respectively.
Use Ctrl+arrow keys when moving. It would be great to still have key repeats for the arrow keys although the main option is disabled, because that is the main feature needed for repositioning the caret; this is extended in the thread How to Have key-repeats of Arrow keys when disabled key-repeats?
Differential solutions
Is there a way to have it so that key repeats aren't disabled but only start after you've held a key down for a longer time? (Random832)
Conclusion
I still get some center-artifacts and an unstable cursor position about 1 time in 40, which is significantly less than two times per sentence. So the cause of the bug/issue is likely still present in the system.
| How to Calibrate Caret-Cursor's position when unstable Cursor in Debian? |
1,593,554,170,000 |
I am curious, is there a way to hide the cursor right before it will be placed at the top left corner of the terminal emulator? And do it independently of terminal emulator (not modifying the source code). Is it possible to use terminfo for such purpose? Or is there something similar to .xinitrc or .bashrc, but for terminals?
|
No, there is not.
Terminal emulators do the same thing as real terminals: from the reset state the cursor starts off visible, until a control sequence is received from the host saying otherwise. The doco of (some of) the terminals being emulated explicitly defines the reset state, including the initial cursor visibility state.
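For reference, the control sequence in question is DECTCEM (DEC private mode 25). A host program can send it at any time after startup, as in this sketch, but there is no way to make "hidden" the terminal's initial state:

```shell
# DECTCEM: DEC private mode 25 toggles text-cursor visibility.
hide_cursor() { printf '\033[?25l'; }   # CSI ? 25 l = hide
show_cursor() { printf '\033[?25h'; }   # CSI ? 25 h = show

# Each sequence is the six bytes ESC [ ? 2 5 l (or h):
hide_cursor | wc -c
```

The same sequences are what tput civis and tput cnorm emit for terminals whose terminfo entries define them.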
Further reading
"Cursor Movement and Panning". VT420 Programmer Reference Manual. EK-VT420-RM-002. February 1992. Digital.
"Table 5–9 Terminal's Default Settings". VT510 Video Terminal Programmer Information. EK-VT510-RM. November 1993. DEC.
| How to hide text cursor without shell? |
1,593,554,170,000 |
I started Kali Linux this morning and everything was like the days before; the cursor was there. Then I turned it off, started again and boom -> No cursor. I'm using Windows 8.1 and Kali Linux is running with VirtualBox.
|
Go to Settings -> System -> Pointing Device and set it to PS/2 mouse. Be sure that the machine is powered off before changing the settings.
| Kali Linux: Cursor not showing |
1,593,554,170,000 |
Currently the cursor in st is always active and visible whether it is over text or not, and its shape is always like |. I'd like it to change to | only when it goes over text, ready to select, and otherwise keep the normal pointer shape.
|
This is not possible in current st. The mouse cursor shape is set by the following line in config.def.h (and therefore config.h):
/*
* Default colour and shape of the mouse cursor
*/
static unsigned int mouseshape = XC_xterm;
...and never altered anywhere else in the code.
If you modified the above line in config.h (using another value from the X11 header file cursorfont.h: possible values, with example appearance), you would get the new cursor shape all of the time. To have it change dynamically, based on the contents of the terminal, you'd have to write the feature yourself -- and given the goals of the suckless project, it is unlikely that such a feature would ever appear in unpatched st.
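For illustration, swapping in the standard arrow pointer (one of the values defined in X11/cursorfont.h) would look like this in config.h — a static change, so you would then get the arrow all of the time instead of the I-beam:

```c
/* config.h: use the standard left-pointing arrow everywhere
 * instead of the I-beam (XC_left_ptr is from X11/cursorfont.h) */
static unsigned int mouseshape = XC_left_ptr;
```

Rebuild st (make clean install) for the change to take effect.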
| st terminal: only change cursor's shape when move over text |
1,593,554,170,000 |
Using i3wm
Using arch wiki https://wiki.archlinux.org/title/Cursor_themes#XDG_specification
$ ls .local/share/icons/
Bibata-Modern-Ice/
In ~/.icons/default/index.theme
[icon theme]
Inherits=Bibata-Modern-Ice
In ~/.config/gtk-3.0/settings.ini
[Settings]
gtk-cursor-theme-name=Bibata-Modern-Ice
Also used ln -s ~/.icons/default/cursors .local/share/icons/Bibata-Modern-Ice/cursors/, but it still doesn't work. It only works in firefox. Doesn't work in other areas of desktop.
LXAppearance also doesn't work.
|
I believe the argument order when creating the symlink is wrong. I just had the same problem but fixed it with:
cd ~/.icons/default
ln -s /usr/share/icons/Bibata-Modern-Classic/cursors/ cursors
| Not able to use cursor theme universally |