1,483,481,088,000
I am running a benchmark to figure out how many jobs I should allow GNU Make to use for optimal compile time. To do so, I am compiling glibc with make -j<N>, with N an integer from 1 to 17. I have done this 35 times per choice of N so far (35 × 17 = 595 runs in total). I am also running it under GNU Time to record the time spent and the resources used by Make. While analyzing the resulting data, I noticed something a little peculiar: there is a very noticeable spike in the number of major page faults when I reach -j8. I should also note that 8 is the number of CPU cores (or, more precisely, hyper-threads) on my computer. I can see the same thing, though less pronounced, in the number of voluntary context switches. To make sure my data wasn't biased, I ran the test two more times and still got the same result. I am running Artix Linux with Linux kernel 5.15.12. What is the reason behind these spikes? EDIT: I've done the same experiment again on a 4-core PC, and I can observe the same phenomenon, at the 4-jobs mark this time around. Also notice the jump in major page faults at the 2-jobs mark. EDIT2: @FrédéricLoyer suggested comparing page faults with the efficiency (inverse of the elapsed time). Here is a box plot of exactly that: We can see that the efficiency improves as we go from 1 job to 4 jobs, but it stays basically the same for larger numbers of jobs. I should also mention that my system has enough memory that even with the maximum number of jobs I do not run out. I am also recording the PRSS (peak resident set size), and here is a box plot of it. We can see that the number of jobs doesn't impact memory usage at all. EDIT3: As MC68020 suggested, here are the plots of TLBS (Translation Lookaside Buffer Shootdown) values for the 4-core and 8-core systems, respectively:
Because your graph showing the global efficiency already provides the correct answer to your quest, I'll try to focus on explanations. A/ EFFICIENCY vs. JOB PLACEMENT: Theoretically (assuming all CPUs are idle at make launch time, no other task is running, and no job i has already completed when job n > i is launched), we may expect CFS to distribute jobs 1 through 4 to CPUs 0, 2, 4, 6 (because there is no benefit from cache sharing), then jobs 5 through 8 to CPUs 1, 3, 5, 7 (still no benefit from cache sharing; and because caches are now shared between siblings, lock contention increases, with a negative impact on global efficiency). Could this be enough to explain the lack of improvement in global efficiency starting from job 5? B/ PAGE FAULTS: As explained by Frédéric Loyer, major page faults are expected at job launch time (due to the necessary read system calls). Your graph shows an almost constant increase from 5 to 8 jobs. The significant increase at -j4 on your 4+4-core system (corroborated by the significant increase at -j2 on your 2+2-core system) appears more intriguing to me. Could it reflect the rescheduling of one job's thread onto some CPU > 4 because of some sudden activity on some CPU <= 4 caused by some other task? The constant number of page faults for -j(n>8) would then be explained by the fact that all CPUs that can be elected already have the appropriate mapping. BTW: just to justify my request for misc. mitigations info in the OP's comments, I wanted to first make sure that all of your cores were fully operational. They appear to be.
Spike in number of page faults with make -j`nproc`
I would like recipes in GNU Make to use environment variables instead of command line arguments, to make them shorter, so that I can concentrate better on what changes in each command. For example, instead of seeing:

    g++ -g -I/path1 -I/path2 -DFLAG -Wall -c hello.cpp -o hello.o

I would like to see something like:

    g++ -c hello.cpp -o hello.o

where g++ would read include directories and everything else from environment variables. Therefore, instead of using this:

    compile.cxx = $(CXX) $(CXXFLAGS) $(CPPFLAGS) -c

    %.o: %.cpp
            $(compile.cxx) $< -o $@

I am using this:

    compile.cxx = CPATH="$(CPATH)" LIBRARY_PATH="$(LIBRARY_PATH)" bash -c $(CXX) -c

    %.o: %.cpp
            $(compile.cxx) $< -o $@

But I get this output:

    CPATH="irrelevant" LIBRARY_PATH="irrelevant" bash -c g++ -c hello.cpp -o hello.o
    g++: fatal error: no input files

Hence, it seems that g++ does not receive its arguments. I am also open to alternatives that achieve a similar effect (that is: short recipes).
You might want to use a so-called compiler response file, which is very common in MS toolchains. Apparently recent versions of GCC also support it (see the manual).

    CXX = g++
    CPPFLAGS = -I/path1 -I/path2 -DFLAG
    CXXFLAGS = -g -Wall
    CXXOPTS = $(CURDIR)/cxx.opts

    %.o: %.cpp $(CXXOPTS)
            $(CXX) @$(CXXOPTS) -c -o $@ $<

    $(CXXOPTS):
            echo "$(CPPFLAGS) $(CXXFLAGS)" >$@

    clean:
            rm -f *.o
            rm -f $(CXXOPTS)

Sample session:

    $ touch a.cpp b.cpp c.cpp
    $ make a.o b.o c.o
    echo "-I/path1 -I/path2 -DFLAG -g -Wall" >cxx.opts
    g++ @cxx.opts -c -o a.o a.cpp
    g++ @cxx.opts -c -o b.o b.cpp
    g++ @cxx.opts -c -o c.o c.cpp
    $ make clean
    rm -f *.o
    rm -f cxx.opts
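The mechanics can be checked without a C++ toolchain. The sketch below is a toy reconstruction, not the answer's exact makefile: `echo` stands in for the compiler, all file names are invented, and `.RECIPEPREFIX` (GNU Make >= 3.82) is used only to avoid embedding literal tab characters.

```shell
# Toy response-file setup: the opts file is a prerequisite of every object,
# and `echo` plays the role of a compiler that accepts @file arguments.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'FLAGS = -I/path1 -DFLAG' \
  '%.o: %.c cxx.opts' \
  '>@echo cc @cxx.opts -c -o $@ $<' \
  'cxx.opts:' \
  '>@echo "$(FLAGS)" > $@' \
  > Makefile
touch a.c
make a.o        # prints the would-be compiler line: cc @cxx.opts -c -o a.o a.c
cat cxx.opts    # the flags that @cxx.opts would expand to
```

Note that, as in the answer above, cxx.opts is only generated once; delete it (or run the clean target) after changing the flags.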
Using environment variables for shorter recipes in GNU Make
I'd like to use the shell assignment operator (i.e., !=) in a makefile that is going to be executed on FreeBSD, macOS, and Linux. Here's an example:

    a!= seq 3

    .PHONY: all
    all: $a

    .PHONY: $a
    $a:
            @echo $@

Here's the expected output:

    $ touch 1 2 3
    $ make all
    1
    2
    3

Unfortunately, the shell assignment operator is not supported by the GNU Make shipped with macOS Monterey 12.6.1, and the output of the example is empty. It works in more recent versions of GNU Make though (e.g., 4.4), which are likely to be encountered in recent Linux distributions. What should I do if I want this makefile to work with any version of GNU Make and bmake?
The solution is to use the $(shell ...) construct in addition to the shell assignment operator, like this:

    a= $(shell seq 3)
    a!= seq 3

    .PHONY: all
    all: $a

    .PHONY: $a
    $a:
            @echo $@

GNU Make 3.81 seems to skip the shell assignment, so it uses the output of the first assignment. bmake, however, does not really care about the first assignment, because the second assignment overrides it anyway.
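The GNU Make side of this is easy to check with a throwaway makefile. The sketch below assumes a GNU Make >= 4.0 (where != exists and overrides the earlier line); on 3.81 the $(shell) line would take effect instead, and either way the variable should come out the same. `.RECIPEPREFIX` is used only to avoid literal tabs.

```shell
# Both assignment styles produce "1 2 3", whichever one the running make honours.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'a= $(shell seq 3)' \
  'a!= seq 3' \
  'all:' \
  '>@echo $(a)' \
  > Makefile
make    # prints: 1 2 3
```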
How do I use a shell assignment in a makefile so that it works with both FreeBSD make (bmake) and macOS make (GNU Make 3.81)?
I have these lines in my Makefile:

    PLATFORM = $(shell uname -r)
    OLD_FREEBSD = 7.3-RELEASE-p2

    ifeq ($(OLD_FREEBSD), $(findstring $(OLD_FREEBSD),$(PLATFORM)))
    ... do some stuff ...
    else
    ... do some other stuff ...
    endif

And this works as expected. But I found that some FreeBSD 7.3 images show 7.2-RELEASE-p2 as the output of "uname -r". I don't know why it behaves this way, but I should cover this variant. Also, this Makefile (run with gmake) should be valid for newer FreeBSD and CentOS. So, what's the best way to verify that the OS is either 7.3 or 7.2?
You can compare a string (a word) to several others with filter, which returns any that match the word. For example, ...

    OLDER_FREEBSD = 7.2-RELEASE-p2
    M = $(filter $(PLATFORM),$(OLD_FREEBSD) $(OLDER_FREEBSD))

    ifneq ($(M),)
    ...
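A self-contained check of the filter approach. In this sketch PLATFORM is hardcoded instead of coming from uname -r, purely for the demo, and `.RECIPEPREFIX` (GNU Make >= 3.82) avoids literal tabs:

```shell
# $(filter word,list) returns the word if it appears in the list, else empty.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'PLATFORM = 7.2-RELEASE-p2' \
  'KNOWN = 7.3-RELEASE-p2 7.2-RELEASE-p2' \
  'ifneq ($(filter $(PLATFORM),$(KNOWN)),)' \
  'all:' \
  '>@echo matched an old FreeBSD' \
  'else' \
  'all:' \
  '>@echo something newer' \
  'endif' \
  > Makefile
make    # prints: matched an old FreeBSD
```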
Check FreeBSD version in Makefile
I have these rules in my GNU Makefile:

    FITXER = fitxa.md

    $(FITXER).html: $(FITXER)
            pandoc --from markdown --to html $(FITXER) -o $(FITXER).html

    $(FITXER).jpeg: $(FITXER).html
            wkhtmltoimage $(FITXER).html $(FITXER).jpeg

Is there any way to apply these rules to a list of files? For example, something like this (in pseudocode):

    for FITXER in [fitxa.md, a.md, b.md, ...] do
        $(FITXER).html: $(FITXER)
                pandoc --from markdown --to html $(FITXER) -o $(FITXER).html
        $(FITXER).jpeg: $(FITXER).html
                wkhtmltoimage $(FITXER).html $(FITXER).jpeg
    endfor
The way to go about this is to define general build rules:

    %.html: %.md
            pandoc --from markdown --to html $< -o $@

    %.jpg: %.html
            wkhtmltoimage $< $@

This tells make how to create HTML files from Markdown files, and then how to create JPEGs from HTML files. Once you've done that, all that's needed is to tell make all the output files you're after:

    all: fitxa.jpg a.jpg b.jpg

    .PHONY: all

make all will figure out what to do to obtain the requested JPEG files. You can still define other rules to process specific Markdown files in a different way:

    foo.jpg: foo.md
            # Process foo.md here to build foo.jpg

These specific rules take precedence over the generic rules.
Apply rules to a list of files in GNU Make (or a 'for' instruction in GNU Make)
Which directories should I expect to have in an install prefix when I'm writing makefiles? I've noticed that in the common prefix /usr there is no etc directory, yet there is an include directory, which doesn't exist under the root directory. Which paths are hard-coded (such as /etc and maybe /var), and which directories belong in a prefix? As far as I can see, bin and lib are standard.
See the FHS (Filesystem Hierarchy Standard) for details: http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard and http://www.pathname.com/fhs/
What do I install into a given install prefix
I have this Makefile where I'm having some trouble simply setting some variables:

    my_stage:
            echo "FULL_NAME=$(FULL_NAME)"
            echo "MY_NAME=$(MY_NAME)"
            $(eval SOME_NAME=$(shell sh -c "echo ${FULL_NAME} | cut -d"-" -f 2"))
            echo "SOME_NAME=$(SOME_NAME)"
            $(eval NAME_ONLY=$(shell sh -c "echo ${SOME_NAME}-only))
            echo "NAME_ONLY=$(NAME_ONLY)"
            $(eval RIGHT_NAME=$(shell sh -c "echo ${SOME_NAME}-right))
            $(eval NAME_APPENDED=$(shell sh -c "echo ${RIGHT_NAME}.${MY_NAME}))
            echo "NAME_APPENDED=$(NAME_APPENDED)"

The intended result is:

    FULL_NAME=Shop-with-me
    MY_NAME=Mariana
    SOME_NAME=with
    NAME_ONLY=with-only
    NAME_APPENDED=with-right.Mariana

However, the current result is:

    FULL_NAME=Shop-with-me
    MY_NAME=Mariana
    SOME_NAME=with
    NAME_ONLY=
    NAME_APPENDED=

Can someone help me figure out what is happening? What am I doing wrong? I already tried this too (without success):

    my_stage:
            echo "FULL_NAME=$(FULL_NAME)"
            echo "MY_NAME=$(MY_NAME)"
            $(eval SOME_NAME=$(shell sh -c "echo ${FULL_NAME} | cut -d"-" -f 2"))
            echo "SOME_NAME=$(SOME_NAME)"
            NAME_ONLY = ${SOME_NAME}-only
            echo "NAME_ONLY=$(NAME_ONLY)"
            RIGHT_NAME = ${SOME_NAME}-right
            NAME_APPENDED = ${RIGHT_NAME}.${MY_NAME}
            echo "NAME_APPENDED=$(NAME_APPENDED)"
When you use curly braces, like ${FOO}, in your command, you refer to a shell variable, as defined in the shell invoking make. When you use parentheses, like $(FOO), in your command, you refer to make's variable. Since you only set make's variables, references to shell variables of the same name obviously result in empty values. So

    $(eval NAME_ONLY=$(shell sh -c "echo $(SOME_NAME)-only"))

should work. A simple

    $(eval NAME_ONLY = $(SOME_NAME)-only)

should work in a rule; you don't need to invoke a shell for that. You can also consider using make's text functions, like subst.
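The fix can be confirmed with a throwaway makefile. This sketch relies on GNU Make expanding recipe lines one at a time, so an $(eval ...) on one line is visible on the next; only make variables appear, no shell variables, and `.RECIPEPREFIX` (GNU Make >= 3.82) is used purely to avoid literal tabs.

```shell
# eval-in-recipe with $(...) references only: each later line sees the value.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'FULL_NAME = Shop-with-me' \
  'my_stage:' \
  '>@$(eval SOME_NAME = $(shell echo $(FULL_NAME) | cut -d- -f2))' \
  '>@echo SOME_NAME=$(SOME_NAME)' \
  '>@$(eval NAME_ONLY = $(SOME_NAME)-only)' \
  '>@echo NAME_ONLY=$(NAME_ONLY)' \
  > Makefile
make my_stage
```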
Makefile - Set multiple variables on a single stage
To embrace the DRY (Don't Repeat Yourself) principle, I sometimes need to share pieces of shell commands in a Makefile. So there is a recipe somewhere in that file like:

    shell=/bin/bash
    # …
    .ONESHELL:
    run:
            # Create bash sub-shell «cmd» var, holding a built-in test as a string
            @cmd='[ $$(grep -iE _dev </etc/hosts | wc -l) -eq 0 ]'
            # "$$" tells make to escape the dollar sign. So echoing …
            @echo "$$cmd"
            # gives «[ $(grep -iE _dev </etc/hosts | wc -l) -eq 0 ]» as I expected
            # I need this variable to be expanded and interpreted as a real
            # built-in square bracket test, so I wrote
            @$$cmd && echo "Pass ! do more magical things …" || true

I expected make to escape the $ sign ($$cmd ⇒ $cmd), which would in turn be expanded in the bash context into the unquoted bracket-test string … right? But I get an error instead:

    /bin/bash: line 2: [: too many arguments

Does anybody have an idea of why this error is raised? Why is bash not given the bracket test I expect?

    [ $(grep -iE _dev </etc/hosts | wc -l) -eq 0 ] && echo "Pass!"

Thank you.
Variable and command substitutions in the shell occur after command boundaries are decided and redirections (and assignments) are parsed. You can put a program name and/or arguments in a variable and substitute them, but not pipes, redirections, other substitutions (including $( command )), assignments, or shell keywords like if and for. In this case you could eliminate the pipe, wc, and the substitution by changing the command and reversing your test:

    cmd='grep -qi _dev /etc/hosts'   # note: file as argument, not redirection
    $$cmd || do_blah

where the substituted grep command fails (silently) if it doesn't find any match in the file, and if it fails, do_blah is executed. In general, to use shell syntax (not just a program and its arguments) in a substituted value, you must use eval to execute the substituted value, or else run a child shell like sh -c "$$cmd" (substitute another shell if needed, depending on environment and/or command).
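A minimal shell-only illustration of the same point (no make involved; the stored command here is arbitrary):

```shell
# Plain expansion word-splits only: `test` receives "&&" and "echo" as
# arguments and complains. eval re-parses the string as shell syntax.
cmd='test 2 -eq 2 && echo pass'
$cmd 2>/dev/null || echo "plain expansion failed: test saw && as an argument"
eval "$cmd"    # prints: pass
```

Inside a recipe, the same rules apply after make turns $$cmd into $cmd, which is why the @$$cmd line failed while eval "$$cmd" or sh -c "$$cmd" works.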
Makefile, square brackets built-in, variable expansion and command substitution
I have several Git repositories with LaTeX files that I want to have typeset automatically. The idea is to have a central bash script (run by a cronjob) that executes a bash script in every repository, which (1) pulls new commits and (2) executes make all, which should call latexmk on the changed LaTeX files. The central bash script simply contains lines like:

    bash ./repos/repo-xyz/cron.sh

Then repos/repo-xyz/cron.sh contains something like:

    cd "$(dirname "$0")"
    git pull
    make all
    cd -

And in the Makefile in the same directory:

    all: $(subst .tex,.pdf,$(wildcard *.tex))

    %.pdf: %.tex
            latexmk -pdf -pdflatex="pdflatex -shell-escape" $< </dev/null

In my user's crontab, I have

    * * * * * bash .../cron.sh 2>&1 > .../cron.log

and SHELL=/bin/bash. When the cronjob is executed, I read the following in the log:

    Already up-to-date.
    latexmk -pdf -pdflatex="pdflatex -shell-escape" myfile.tex </dev/null
    .../ (this comes from the line "cd -")

As you can see, latexmk is invoked but doesn't do anything; myfile.pdf is not generated. When I run bash cron.sh (as the same user) from the top-level directory, this does work. What could cause the Makefile to not execute commands when run from a bash script that is run by a cron job (at least, I think it's make not executing this command)? This is GNU Make 3.81 on Linux ubuntu 3.13.0-51-generic #84-Ubuntu SMP Wed Apr 15 12:08:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux.
The problem turned out to be that the path to pdflatex was being defined in my $HOME/.profile. I thus changed the cronjob to: * * * * * . $HOME/.profile; bash .../cron.sh 2>&1 > .../cron.log in accordance with https://unix.stackexchange.com/a/27291/37050.
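An alternative that avoids touching the crontab: pin the needed PATH inside the Makefile itself, so cron's minimal environment is enough. This is only a sketch; the TeX directory below is a placeholder, so substitute whatever dirname "$(which pdflatex)" prints in a normal login shell:

```make
# Placeholder path -- replace with the real location of pdflatex/latexmk.
export PATH := /usr/local/texlive/2015/bin/x86_64-linux:$(PATH)

all: $(subst .tex,.pdf,$(wildcard *.tex))

%.pdf: %.tex
	latexmk -pdf -pdflatex="pdflatex -shell-escape" $< </dev/null
```

The export directive makes the modified PATH visible to the shells that make spawns for each recipe.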
Latexmk, from Makefile, from bash script, from Cron - Latexmk not being executed
I want to simulate the Tiago robot in Gazebo, and I am using the available ROS package. I simulated it before without problems, but right now I cannot. I am using ROS Melodic and Ubuntu 18.04 on a KVM virtual machine. When I use the catkin_make command to build the workspace, the error below happens: [ 1%] Built target _tiago_pick_demo_generate_messages_check_deps_PickUpPoseGoal [ 1%] Generating dynamic reconfigure files from cfg/SphericalGrasp.cfg: /home/pouyan/tiago_ws/devel/include/tiago_pick_demo/SphericalGraspConfig.h /home/pouyan/tiago_ws/devel/lib/python2.7/dist-packages/tiago_pick_demo/cfg/SphericalGraspConfig.py [ 1%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libposition_controllers.so [ 1%] Built target tiago_pcl_tutorial_gencfg Scanning dependencies of target run_traj_control [ 1%] Built target _tiago_pick_demo_generate_messages_check_deps_PickUpPoseResult Scanning dependencies of target transmission_interface_parser [ 1%] Built target _tiago_pick_demo_generate_messages_check_deps_PickUpPoseActionResult Generating reconfiguration files for SphericalGrasp in spherical_grasps_server [ 1%] Building CXX object tiago_tutorials/tiago_trajectory_controller/CMakeFiles/run_traj_control.dir/src/run_traj_control.cpp.o Wrote header file in /home/pouyan/tiago_ws/devel/include/tiago_pick_demo/SphericalGraspConfig.h [ 1%] Building CXX object ros_control/transmission_interface/CMakeFiles/transmission_interface_parser.dir/src/transmission_parser.cpp.o Scanning dependencies of target gazebo_ros_block_laser Scanning dependencies of target effort_controllers [ 1%] Built target force_torque_sensor_controller [ 1%] Built target actuator_state_controller [ 1%] Built target tiago_pick_demo_gencfg Scanning dependencies of target gazebo_ros_laser Scanning dependencies of target polled_camera_generate_messages_cpp [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_effort_controller.cpp.o [ 1%] Linking CXX shared library 
/home/pouyan/tiago_ws/devel/lib/libimu_sensor_controller.so Scanning dependencies of target polled_camera_generate_messages_eus [ 1%] Built target polled_camera_generate_messages_cpp [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_velocity_controller.cpp.o [ 1%] Built target polled_camera_generate_messages_eus [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_position_controller.cpp.o [ 1%] Built target position_controllers [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_group_effort_controller.cpp.o [ 2%] Linking CXX executable /home/pouyan/tiago_ws/devel/lib/pal_gazebo_worlds/increase_real_time_factor [ 2%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/src/gazebo_ros_block_laser.cpp.o [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/src/gazebo_ros_laser.cpp.o [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libjoint_state_controller.so [ 3%] Built target imu_sensor_controller Scanning dependencies of target diagnostic_msgs_generate_messages_lisp [ 3%] Built target diagnostic_msgs_generate_messages_lisp [ 3%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_group_position_controller.cpp.o [ 3%] Built target increase_real_time_factor Scanning dependencies of target diagnostic_msgs_generate_messages_py [ 3%] Built target diagnostic_msgs_generate_messages_py Scanning dependencies of target polled_camera_generate_messages_nodejs [ 3%] Built target polled_camera_generate_messages_nodejs [ 3%] Built target joint_state_controller Scanning dependencies of target polled_camera_generate_messages_lisp Scanning dependencies of target diagnostic_msgs_generate_messages_eus [ 3%] Built target diagnostic_msgs_generate_messages_eus [ 3%] Built target 
polled_camera_generate_messages_lisp Scanning dependencies of target diagnostic_msgs_generate_messages_nodejs Scanning dependencies of target polled_camera_generate_messages_py [ 3%] Built target diagnostic_msgs_generate_messages_nodejs [ 3%] Built target polled_camera_generate_messages_py Scanning dependencies of target diagnostic_msgs_generate_messages_cpp Scanning dependencies of target MultiCameraPlugin [ 3%] Built target diagnostic_msgs_generate_messages_cpp Scanning dependencies of target gazebo_ros_projector [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/MultiCameraPlugin.dir/src/MultiCameraPlugin.cpp.o [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libjoint_torque_sensor_state_controller.so [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_projector.dir/src/gazebo_ros_projector.cpp.o [ 3%] Built target joint_torque_sensor_state_controller Scanning dependencies of target gazebo_ros_hand_of_god [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_hand_of_god.dir/src/gazebo_ros_hand_of_god.cpp.o c++: fatal error: Killed signal terminated program cc1plus compilation terminated. tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/build.make:62: recipe for target 'tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/src/look_to_point.cpp.o' failed make[2]: *** [tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/src/look_to_point.cpp.o] Error 1 CMakeFiles/Makefile2:28217: recipe for target 'tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/all' failed make[1]: *** [tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... c++: fatal error: Killed signal terminated program cc1plus compilation terminated. 
tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/build.make:62: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/src/nodes/segment_table.cpp.o' failed make[2]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/src/nodes/segment_table.cpp.o] Error 1 CMakeFiles/Makefile2:41209: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/all' failed make[1]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/all] Error 2 c++: fatal error: Killed signal terminated program cc1plus compilation terminated. tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/build.make:62: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/src/pcl_filters.cpp.o' failed make[2]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/src/pcl_filters.cpp.o] Error 1 CMakeFiles/Makefile2:41315: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/all' failed make[1]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/all] Error 2 [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libtransmission_interface_parser.so [ 3%] Built target transmission_interface_parser [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libaruco_ros_utils.so c++: fatal error: Killed signal terminated program cc1plus compilation terminated. 
gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/build.make:62: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/src/gazebo_ros_block_laser.cpp.o' failed make[2]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/src/gazebo_ros_block_laser.cpp.o] Error 1 CMakeFiles/Makefile2:44726: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/all' failed make[1]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/all] Error 2 [ 3%] Built target aruco_ros_utils c++: fatal error: Killed signal terminated program cc1plus compilation terminated. gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/build.make:62: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/src/gazebo_ros_laser.cpp.o' failed make[2]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/src/gazebo_ros_laser.cpp.o] Error 1 CMakeFiles/Makefile2:44827: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/all' failed make[1]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/all] Error 2 [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libeffort_controllers.so [ 3%] Built target effort_controllers [ 3%] Linking CXX executable /home/pouyan/tiago_ws/devel/lib/tiago_trajectory_controller/run_traj_control [ 3%] Built target run_traj_control [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libgazebo_ros_hand_of_god.so [ 3%] Built target gazebo_ros_hand_of_god [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libgazebo_ros_projector.so [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libMultiCameraPlugin.so [ 3%] Built target MultiCameraPlugin [ 3%] Built target gazebo_ros_projector Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j16 -l16" failed I searched a lot but I don't know
how to solve it. Thanks. Solution: Following Marcus's answer, I doubled the memory allocated to my KVM guest. For editing the memory size of KVM, this YouTube video worked for me: https://www.youtube.com/watch?v=LwLHwXWoYjk
You're running out of RAM, so badly that the operating system's OOM killer terminates your compiler (that is the "c++: fatal error: Killed" message). So, reduce the parallelism (e.g., make -j4 instead of -j16), or assign more RAM to the build VM.
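If you stay on the same VM, one way to hedge against this is to scale the job count to available memory rather than to CPU count. A rough Linux-only sketch; the 2 GiB-per-job figure is a guess (heavy C++ translation units can need more):

```shell
# Read MemAvailable (in KiB) from /proc/meminfo and allow ~1 job per 2 GiB,
# with a floor of one job.
avail_kib=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
jobs=$(( avail_kib / (2 * 1024 * 1024) ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "suggested: catkin_make -j$jobs -l$jobs"
```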
Invoking "make -j16 -l16" failed
I was reading a makefile where I found this statement:

    make -C /lib/modules/$(shell uname -r)/build/ M=$(PWD) modules

Can anyone explain what shell is here? Command substitution seems to be attempted, but for that, uname -r alone would have been sufficient. Why is the word shell being used, and what does it mean? I have already tried man shell but, as I expected, it shows nothing. I also tried executing shell uname -r on the command line; it does not work. I believe that this variable is defined in make.
I bet this line is within a Makefile, most likely a recursive call to make. Makefiles use $(VAR) (or ${VAR}) for local variables or environment variables. Note the difference from bash, where $(VAR) means "execute VAR and fetch the result"; for a similar effect in a makefile, another syntax is used: $(shell uname -r). To sum up:

    $(shell uname -r) will expand to the result of uname -r
    $(PWD) will expand to $PWD's value

See the GNU Makefile reference.
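You can watch the expansion happen with a two-line makefile. In this sketch, `.RECIPEPREFIX` (GNU Make >= 3.82) is used only to avoid a literal tab character:

```shell
# $(shell uname -r) runs when make reads the := assignment, so KREL holds
# the kernel release string before any recipe executes.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'KREL := $(shell uname -r)' \
  'all:' \
  '>@echo kernel=$(KREL)' \
  > Makefile
make
```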
What is the meaning of shell in $(shell uname -r)
I'm using GNU Make and stow to manage some configurations (dotfiles). I have multiple directories in my repo:

    dotfiles/
    ├── Makefile
    ├── package1/
    └── package2/

Currently, my Makefile looks like:

    PACKAGES = package1 package2

    .PHONY: all $(PACKAGES)
    all: $(PACKAGES)

    package1:
            stow --no-fold $@

    package2:
            stow --no-fold $@

I want to define a default rule for packages, so I did:

    PACKAGES = package1 package2

    .PHONY: all $(PACKAGES)
    all: $(PACKAGES)

    %:
            stow --no-fold $@

But that didn't work:

    $ make
    make: Nothing to be done for `all'.
    $ make package1
    make: Nothing to be done for `package1'.
    $ make package2
    make: Nothing to be done for `package2'.

So: Is it possible to define a "default" rule for directories? If yes, how do I do it?
You could replace your rule with:

    $(PACKAGES):
            stow --no-fold $@
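A throwaway reproduction of why this helps: GNU Make skips implicit-rule search for .PHONY targets, so a %: pattern rule never applies to them, while an explicit multi-target rule always runs. In this sketch `echo` stands in for stow, and `.RECIPEPREFIX` (GNU Make >= 3.82) avoids literal tabs:

```shell
# Explicit phony targets run their recipe even though the directories exist.
dir=$(mktemp -d) && cd "$dir"
mkdir package1 package2
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'PACKAGES = package1 package2' \
  '.PHONY: all $(PACKAGES)' \
  'all: $(PACKAGES)' \
  '$(PACKAGES):' \
  '>@echo stow --no-fold $@' \
  > Makefile
make
```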
Make pattern match directories
I have the makefile below. Why does the rule for the data/vix.csv target always execute when I run make? In a recent answer on SO, someone showed me how to update last_updated.txt on 24-hour intervals, even if I was running make frequently. As a result, @echo "\n\n##### updating last_updated.txt#####\n\n" rarely prints when I run make. As far as I can see, that is the only thing that updates last_updated.txt. But is something else modifying that file? Something else seems to be updating it, because it's the only dependency of the first rule, and @echo "\n\n######## downloading fresh data and updating vix.csv ##########\n\n" is always printing. This isn't great, because this is the portion of the makefile that calls a web API.

    TS24 := .timestamp24
    DUMMY := $(shell touch -d 'yesterday' "$(TS24)")

    # update data if it has been 24 hours
    data/vix.csv: last_updated.txt
            @echo "\n\n######## downloading fresh data and updating vix.csv ##########\n\n"
            Rscript update_csv.R

    # signal that it's been >24 hours since data download
    last_updated.txt: $(TS24)
            @echo "\n\n##### updating last_updated.txt#####\n\n"
            touch "$@"

    .PHONY: run
    run:
            @echo "\n\n####### running shiny app ###########\n\n"
            R --quiet -e "shiny::runApp(launch.browser=TRUE)"

    ## remove all target, output and extraneous files
    .PHONY: clean
    clean:
            rm -f *~ *.Rout *.RData *.docx *.pdf *.html *-syntax.R *.RData
Run ls -l data/vix.csv to see the actual timestamp of data/vix.csv. Does it reflect the time you last ran make and saw the downloading fresh data and updating vix.csv message? Or does it reflect the timestamp of the source material, wherever Rscript update_csv.R gets it from? Or does it actually get updated at all?
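The mtime comparison behind that diagnostic can be seen with a toy makefile (`.RECIPEPREFIX`, GNU Make >= 3.82, avoids literal tabs). The key point for the question: if a recipe never actually updates its target file, make considers the target out of date on every run.

```shell
# A target rebuilds only when its prerequisite is newer than the target file.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'out: dep' \
  '>@echo rebuilding' \
  '>@touch out' \
  > Makefile
touch dep
make    # rebuilding: out does not exist yet
make    # nothing to do: the recipe touched out, so out is newer than dep
touch dep
make    # rebuilding: dep is now newer than out
```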
Why is this makefile command running so frequently?
I have tests in tests/FILENAME-test.sh, and for each one I want to run the script inside a Docker container. How can I refactor this Makefile to not use TEST_OUTPUTS like I have? Also, how can I make each docker run command run in parallel?

    .PHONY: test image

    TESTS=$(wildcard tests/*-test.sh)
    TEST_OUTPUTS=$(patsubst %.sh,%.out,$(TESTS))

    %.out: %.sh image
            @sudo docker run -t box-test /bin/bash "-c" "./$^"

    test: $(TEST_OUTPUTS)
            @echo

    image:
            @sudo docker build -q -t box-test .
Here it is:

    .PHONY: test image

    TESTS=$(wildcard tests/*-test.sh)

    test: $(TESTS)

    $(TESTS): image
            @sudo docker run -t box-test /bin/bash "-c" "./$@"

    image:
            @sudo docker build -q -t box-test .

And for the docker run commands to run in parallel, just use make -j test (you may specify a maximum number of concurrent runs with -j).
How can I refactor this Makefile to not use fake .out outputs?
From version 10, FreeBSD uses Clang/LLVM instead of GCC. On the surface of it, everything should perform as before, or even better, but I have faced this reality more often than I would like: some code can't be compiled this way. For example, I tried to compile Snapwm. First, native FreeBSD make is actually pmake, so that is out of the question; gmake is our choice. But issuing gmake on the code produces this error:

    gcc -g -std=c99 -pedantic -Wall -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2 -c -o snapwm.o snapwm.c
    gmake: gcc: Command not found
    gmake: *** [snapwm.o] Error 127

So the question becomes: how do I compile code that suffers from these setbacks?
Sometimes the code needs a patch. I've created one which you can apply, after which it builds with gmake. I didn't try the compiled snapwm; I've tested only the building process.

    diff -ur Nextwm-master.orig/Makefile Nextwm-master/Makefile
    --- Nextwm-master.orig/Makefile 2014-03-12 19:46:34.000000000 +0100
    +++ Nextwm-master/Makefile      2014-04-16 13:07:08.000000000 +0200
    @@ -1,12 +1,12 @@
    -CFLAGS+= -g -std=c99 -pedantic -Wall -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2
    +CFLAGS+= -g -std=c99 -pedantic -Wall -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2 -I/usr/local/include/
     LDADD+= -lX11 -lXinerama
    -LDFLAGS= -Wl,-O1,--sort-common,--as-needed,-z,relro
    +LDFLAGS= -Wl,-O1,--sort-common,--as-needed,-z,relro,-L/usr/local/lib
     EXEC=snapwm

     PREFIX?= /usr/local
     BINDIR?= $(PREFIX)/bin

    -CC=gcc
    +CC=clang

     all: $(EXEC)
    diff -ur Nextwm-master.orig/snapwm.c Nextwm-master/snapwm.c
    --- Nextwm-master.orig/snapwm.c 2014-03-12 19:46:34.000000000 +0100
    +++ Nextwm-master/snapwm.c      2014-04-16 13:03:24.000000000 +0200
    @@ -27,6 +27,7 @@
     //#include <X11/keysym.h>
     /* For a multimedia keyboard */
     #include <X11/XF86keysym.h>
    +#include <sys/signal.h>
     #include <X11/Xproto.h>
     #include <X11/Xutil.h>
     #include <X11/Xatom.h>
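An alternative worth knowing before reaching for a patch: a variable given on the make command line overrides an assignment inside the makefile, so gmake CC=clang would replace the hard-coded CC=gcc without editing the file (the include/library path additions would still need the patch, or CFLAGS/LDFLAGS on the command line too). A tiny demonstration, with `.RECIPEPREFIX` (GNU Make >= 3.82) used to avoid literal tabs:

```shell
# Command-line variable assignments take precedence over makefile assignments.
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' \
  '.RECIPEPREFIX = >' \
  'CC=gcc' \
  'all:' \
  '>@echo building with $(CC)' \
  > Makefile
make             # building with gcc
make CC=clang    # building with clang
```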
Building Snapwm on FreeBSD (Problem of gcc and clang)?
An app is trying to configure with sudo make configure:

    (cd /opt/ioapi-3.2/ioapi ; sed -e 's|IOAPI_BASE|/opt/ioapi-3.2|' -e 's|LIBINSTALL||' -e 's|BININSTALL||' -e 's|IOAPI_DEFS||' -e 's|NCFLIBS|-L/opt/netcdf/lib -lnetcdff -L/opt/netcdf/lib -lnetcdf|' -e 's|MAKEINCLUDE|include /opt/ioapi-3.2/ioapi/Makeinclude|' -e 's|PVMINCLUDE|include |' < Makefile..sed > Makefile )
    /bin/sh: Makefile..sed: No such file or directory
    make: *** [Makefile:211: configure] Error 1

I don't understand what the last part of the command is supposed to do, and therefore why it's generating the error. The output from make -n configure is:

    (cd /home/centos/ioapi-3.2/ioapi ; sed -e 's|IOAPI_BASE|/home/centos/ioapi-3.2|' -e 's|LIBINSTALL||' -e 's|BININSTALL||' -e 's|IOAPI_DEFS||' -e 's|NCFLIBS|-lnetcdff -lnetcdf|' -e 's|MAKEINCLUDE|include /home/centos/ioapi-3.2/ioapi/Makeinclude|' -e 's|PVMINCLUDE|include |' < Makefile..sed > Makefile )
    (cd /home/centos/ioapi-3.2/m3tools ; sed -e 's|IOAPI_BASE|/home/centos/ioapi-3.2|' -e 's|LIBINSTALL||' -e 's|BININSTALL||' -e 's|IOAPI_DEFS||' -e 's|NCFLIBS|-lnetcdff -lnetcdf|' -e 's|MAKEINCLUDE|include /home/centos/ioapi-3.2/ioapi/Makeinclude|' -e 's|PVMINCLUDE|include |' < Makefile..sed > Makefile )
Looking at the Makefile.template file of the ioapi-3.2 project on GitHub, it is clear that the sed command you are seeing is the result of make expanding the SEDCMD variable:

SEDCMD = \
-e 's|IOAPI_BASE|$(BASEDIR)|' \
-e 's|LIBINSTALL|$(LIBINST)|' \
-e 's|BININSTALL|$(BININST)|' \
-e 's|IOAPI_DEFS|$(IOAPIDEFS)|' \
-e 's|NCFLIBS|$(NCFLIBS)|' \
-e 's|MAKEINCLUDE|include $(IODIR)/Makeinclude|' \
-e 's|PVMINCLUDE|include $(PVMINCL)|'

like so:

configure: ${IODIR}/Makefile ${TOOLDIR}/Makefile
	(cd $(IODIR) ; sed $(SEDCMD) < Makefile.$(CPLMODE).sed > Makefile )
	(cd $(TOOLDIR) ; sed $(SEDCMD) < Makefile.$(CPLMODE).sed > Makefile )

As you can see, it tries to read a file called Makefile.$(CPLMODE).sed. The CPLMODE variable is mentioned several times in comments in the Makefile, but never set to a default value; with the variable empty, the filename comes out as Makefile..sed, which is the file sed complains about. It seems that valid values for this variable are nocpl, cpl, or pncf. The README.txt file in the repository says to customize the Makefile, which I must assume includes making a copy of Makefile.template called Makefile in the top-level directory of the project and then modifying this. It seems you just haven't made all the necessary modifications to it yet.
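To see the configure step in miniature, here is a hypothetical two-line stand-in for the template run through the same kind of sed substitutions that SEDCMD performs (the paths and the nocpl mode are made up for the demo):

```shell
# Miniature of the ioapi "configure" rule: substitute placeholder tokens
# in a template to produce a Makefile. Template content and paths are invented.
dir=$(mktemp -d)
printf 'BASEDIR = IOAPI_BASE\nMAKEINCLUDE\n' > "$dir/Makefile.nocpl.sed"
sed -e 's|IOAPI_BASE|/opt/ioapi-3.2|' \
    -e 's|MAKEINCLUDE|include /opt/ioapi-3.2/ioapi/Makeinclude|' \
    < "$dir/Makefile.nocpl.sed" > "$dir/Makefile"
cat "$dir/Makefile"
rm -r "$dir"
```

This prints the generated Makefile fragment with both placeholders replaced, which is all the real configure rule does, one template per subdirectory.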
Error when using Makefile..sed
1,483,481,088,000
I have a small application that was tested on Linux and it worked. Now I would like to build the same code on FreeBSD. To build it on FreeBSD I needed to change my Makefile a little. Here is my amended version:

CXX := gcc
LDFLAGS += -L/usr/local/lib -R/usr/local/lib -L/usr/lib -R/usr/lib -L/usr/local/include -R/usr/local/include -L/usr/include -R/usr/include
CXXFLAGS += -pedantic -Wall -Wextra -std=c++17
LIBS += -lprotobuf -lstdc++
INCL += -I/usr/local/include

SRCS := my_app.cpp \
	file1.pb.cc \
	file2.pb.cc

OBJS := $(SRCS:% = %.o)

target := my_app

all:
	$(CXX) $(OBJS) -o $(target) $(LIBS) $(INCL) $(LDFLAGS)

%.o:%.cpp
	$(CXX) $(CXXFLAGS) $(INCL) $(LDFLAGS) -c $^ -o $@

clean:
	rm -rf *o $(target)

The problem is that I get a lot of linker errors, all of them related to google protobuf functions. I am including one of them below:

/usr/local/bin/ld: /tmp//ccpo2Qek.o: in function `main':
my_app.cpp:(.text+0x3a4): undefined reference to `google::protobuf::MessageLite::SerializeAsString[abi:cxx11]() const'

To build the application I use gmake. I have installed protobuf on my FreeBSD system using pkg install. I can find some google protobuf .h files in /usr/local/include and some protobuf .so libraries in /usr/local/lib. I tried to add these locations to LDFLAGS but it still doesn't work. Thank you in advance for any help.
I replaced gcc with c++ and now it works.
FreeBSD - problem with linking protobuf
1,298,696,585,000
I'm currently adding a little bit of Git functionality to my menu.vim file, and for using a certain command (Gitk) I need to find out Vim's current directory. How does one do that and include it in a command? (i.e. :!echo "%current-directory") I'll admit here that I asked the wrong question, but I figured it out. I'm currently using these in my menu.vim:

function g:Gitk()
	:!cd $(dirname %); gitk
endfun

function g:GitGui()
	:!cd $(dirname %); git gui
endfun
I think either :pwd or getcwd() is what you are looking for. Just to help memorize things:

:pwd => print working directory
getcwd() => get current working directory
Vim - Get Current Directory
1,298,696,585,000
Several tools such as grep, py.test, etc. use the pattern <FileName>:<line number>: to point to errors. For example:

; grep -Hn Common setup.cfg
setup.cfg:11: Common

How can I modify vim and gvim so that I can invoke them like so:

gvim setup.cfg:11:

instead of

gvim setup.cfg +11

I know that I can write a small shell script that would parse things, but I wonder if there is an easier way.
You can use the file:line plugin available here, which does exactly what you want...
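If you'd rather avoid a plugin, the small shell script the question mentions really is only a few lines. This sketch (the open_at name is made up) shows the parsing; it echoes the gvim command it would run instead of exec'ing it, so the demo is self-contained:

```shell
# Hypothetical wrapper: split "file:line:" and build the gvim invocation.
open_at() {
    t=${1%:}                            # drop one trailing ":"
    printf 'gvim +%s %s\n' "${t#*:}" "${t%%:*}"
}
open_at setup.cfg:11:                   # prints: gvim +11 setup.cfg
```

In a real wrapper you would replace the printf with exec gvim "+${t#*:}" "${t%%:*}".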
vim - open file and goto line number using <filename>:<line nbr>:
1,298,696,585,000
In vim you can open a file under the cursor by using the gf command. One can also easily open that file in a new split window by hitting <c-w> f. This is a really nice and time-saving feature. However, I can't figure out how to open the file in an already opened split window (without creating a new one).
I got all the pieces together to do the trick. The best way is to create a custom mapping for all the commands:

map <F8> :let mycurf=expand("<cfile>")<cr><c-w> w :execute("e ".mycurf)<cr><c-w>p

Explanation:

map <F8> : maps the commands that follow onto F8
let mycurf=expand("<cfile>") : gets the filename under the cursor and saves it in mycurf
<c-w>w : changes the focus to the next open split window
execute("e ".mycurf) : opens the file saved in mycurf
finally <c-w>p : changes the focus back to the previous window (where we actually came from)
vim shortcut to open a file under cursor in an already opened window
1,298,696,585,000
I'm used to modeless editors. Only the past year I've been using vim/gvim, which has a modal approach. I'm used to tabs in all editors, since before vim all of them are used to it. In gvim, you don't necessarily need to use tabs: you can just use buffers. My question is: what are the advantages/disadvantages between these two approaches (buffers vs tabs)? Why do these both options exist?
See https://stackoverflow.com/questions/102384/using-vims-tabs-like-buffers/103590#103590 (or why splitting the vim community among all SE/SO sites is a bad idea)
Buffers or tabs in vim? What are advantages/disadvantages of each approach?
1,298,696,585,000
I'm working on a side project with both JavaScript and SQL source files. When I'm editing the JavaScript, Vim behaves normally. However, when I'm editing the SQL files, there's about a one-second delay between when I press Ctrl+C and when Vim exits insert mode. When I use the Escape key, or Shift+Enter which I mapped in my ~/.vimrc as a test, it shows no delay. I thought perhaps it was something to do with the syntax highlighting, but when I ran :syntax off to try and fix it, the delay still showed up. I also tried :setf text, which also did not work. I have only a couple of plugins installed (CtrlP, NerdTree, and highlighters for Jade, Less, and CoffeeScript) so I don't think that's what's interfering. Does anyone know what could be going on?
You seem to have a filetype plugin that installs a buffer-local mapping for Ctrl-C. You can check with:

:verbose imap <buffer> <C-c>

It's probably the default one, cp. :help ft_sql. The prefix key can be reconfigured via this (in your ~/.vimrc):

let g:ftplugin_sql_omni_key = '<C-j>'
Vim delay when using Ctrl+C, but only in SQL files
1,298,696,585,000
I have to use Ubuntu 10.04 at work, and can't upgrade it. I'm using Vim/gVim 7.2. I want to upgrade it to 7.3 (with Python and Ruby extension support). Which is the best way? Add an entry in sources.lists and install a 7.3 vim/gvim package from it, or build from source? What disadvantages would I have from each approach?
The first place to check is if there's a backport, but there isn't, which isn't surprising since maverick has vim 7.2 too. The next thing to try is whether someone's put up a repository with vim 7.3 packages somewhere, preferably a PPA. There are many PPAs with vim, including several with 7.3 (not an exhaustive list).

If you don't find a binary package anywhere or don't like the ones you find, the next easiest step is to grab the source package from natty, which has vim 7.3. Download the source package (.dsc, .debian.tar.gz and .orig.tar.gz), then run:

apt-get install build-essential fakeroot
apt-get build-dep vim
dpkg-source -x vim_7.3.035+hg~8fdc12103333-1ubuntu2.dsc
cd vim-7.3.035+hg~8fdc12103333
# Edit debian/changelog to add an entry with your name and “recompiled for lucid”
dpkg-buildpackage -rfakeroot -us -uc -b -nc

If all goes well, you'll have binary packages for your distribution. If you run into missing dependencies or compilation errors, this has to be solved on a case-by-case basis.

The next thing to try is to compile the upstream 7.3 source with the packaging from your Ubuntu version. This gives you a nice and clean package, but it's a little more involved, so if you don't feel confident in doing this without instructions I recommend you just compile the upstream source.

If you end up compiling the upstream source, by default you'll end up with the files under /usr/local, and it won't be easy to uninstall them, or even to know what you have. Whenever you install something without using the package manager, I recommend installing into a separate directory structure and creating symbolic links in /usr/local (or ~/usr or whatever). Stow is nice for that:

Install under /usr/local/stow (or ~/usr/stow or wherever). With many programs, you can use something like ./configure --prefix=/usr/local/stow/vim-7.3. This will put the main binary at /usr/local/stow/vim-7.3/bin, and so on.

Run stow vim-7.3 from the /usr/local/stow directory. This creates symbolic links in the “normal” directories, e.g. /usr/local/bin/vim -> ../../stow/vim-7.3/bin/vim. If you ever want to uninstall this program, just run stow -D vim-7.3 to remove the symbolic links, and delete /usr/local/stow/vim-7.3.

There is also xstow, which is a similar but more powerful program (one of its benefits is that it can deal with conflicts).
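To make the stow layout concrete, here is a minimal simulation of what it boils down to, using plain ln -s (the demo directory stands in for /usr/local; stow just automates this link farm):

```shell
# What "stow vim-7.3" effectively does: relative symlinks from the normal
# prefix directories into the per-package stow subtree. Names are made up.
mkdir -p demo/stow/vim-7.3/bin demo/bin
touch demo/stow/vim-7.3/bin/vim           # stand-in for the installed binary
ln -s ../stow/vim-7.3/bin/vim demo/bin/vim
readlink demo/bin/vim                     # prints: ../stow/vim-7.3/bin/vim
rm -r demo                                # "stow -D" plus deleting the subtree, in effect
```

Uninstalling then really is just removing the links and the one subtree, which is the whole appeal.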
Best way to upgrade vim/gvim to 7.3 in Ubuntu 10.04?
1,298,696,585,000
I often use Control+L to redraw the screen in Vim. In particular, when I come out of sleep or change monitor configurations I often find that Vim needs to be redrawn. I thought it might be simpler to just add something to my vimrc that redraws on focus. Is there a command that I can add to my .vimrc file that redraws the buffer when the window/buffer gets the focus? In particular, a good command should have no noticeable negative performance or other related side effects.
vim has an event you can bind to for this, FocusGained; combine this with the redraw! command (the ! causes the window to be cleared first):

:au FocusGained * :redraw!

The syntax here can be read as 'automatically run the command (au is short for autocmd) :redraw! when I get the event FocusGained for any file matching the pattern *'. To make this permanent, add it to your ~/.vimrc (the leading : isn't needed in vimrc). To test events you can use a more 'obvious' command like:

:au FocusGained * :q!
How to automatically refresh Vim on buffer/window focus?
1,298,696,585,000
Should I install vim or GVim ? I develop mainly Ruby on Rails (I also use IDE's, but different topic). Are there any differences or advantage of using Gvim vs vim ?
In gVim you can select the font, vim depends on the font the terminal provides. And it's the same for colour support. Gvim has full support, vim depends on the terminal. Gvim additionally has menus and a toolbar, which vim lacks. One big advantage of vim is that, since it's a terminal application, you have a full fledged terminal at your fingertips. gVim has very rudimentary terminal support. This is handy if you run :make, for instance.
Advantages (or downsides) of GVim over Vim to edit code [closed]
1,298,696,585,000
I search for a pattern, say 'START OF ARAMBOL', and it is matched in a file. Now I would like to comment every line from line no. 1 to the line where the pattern matched. I have to do this for more than 200 files. I can do it using perl too, but is there any good sed method to do so? Thanks
As a one-liner to demonstrate the concept:

echo -e 'a\nb\nc\nPATTERN\nd\ne\nf' | sed '0,/PATTERN/ s/^/#/'

You just have to adapt this to your context: the 'PATTERN', the commenting character (I assumed '#'), and how you apply it to all your files. If they are all 'fileXXX.txt', you can run:

sed -i '0,/PATTERN/ s/^/#/' file*txt
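One subtlety worth noting: the 0 starting address is a GNU sed extension. Its point is that the range can close on the very first line, which the portable 1,/PATTERN/ form cannot do. A quick check of the GNU form on an input whose first line matches:

```shell
# GNU sed only: "0,/PATTERN/" ends the range at the first match, even when
# that match is on line 1.
printf 'PATTERN\nb\nc\n' | sed '0,/PATTERN/ s/^/#/'
# prints:
# #PATTERN
# b
# c
```

With 1,/PATTERN/ on the same input, the range would stay open looking for a second match, so every line would get commented.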
Comment from start of file to a pattern matched line using sed
1,298,696,585,000
Powerline is some sort of plugin for Vim and Gvim. In order to be more useful it uses fonts that have some pictures (symbols) added to them. In other words they've "patched" the font set. Recently Powerline stated that the code has been changed and you have to patch your fonts again. The link to the same can be found here. Questions: Can I patch my already patched font again, or should I get a fresh source font? What kind of fonts can be patched (TTF for example, or ...)?
The patch script is accessible here in its own GitHub repo, titled powerline-patcher.

An experiment

I first started by downloading the above patching script:

$ git clone https://github.com/Lokaltog/powerline-fontpatcher.git

I then selected a sample .ttf file to test out your question:

$ ls -lr | grep ttf
-rw-r--r--. 1 saml saml 242700 Jul 2 20:29 LucidaTypewriterRegular.ttf

Running the font patching script produced the following output:

$ scripts/powerline-fontpatcher LucidaTypewriterRegular.ttf
The glyph named fraction is mapped to U+2215. But its name indicates it should be mapped to U+2044.
The glyph named periodcentered is mapped to U+2219. But its name indicates it should be mapped to U+00B7.
The glyph named macron is mapped to U+02C9. But its name indicates it should be mapped to U+00AF.
The glyph named stigma is mapped to U+03DA. But its name indicates it should be mapped to U+03DB.
The glyph named digamma is mapped to U+03DC. But its name indicates it should be mapped to U+03DD.
The glyph named koppa is mapped to U+03DE. But its name indicates it should be mapped to U+03DF.
The glyph named sampi is mapped to U+03E0. But its name indicates it should be mapped to U+03E1.
The glyph named fraction1 is mapped to U+2044. But its name indicates it should be mapped to U+2215.

With the resulting file:

$ ls -lr | grep ttf
-rw-r--r--. 1 saml saml 242700 Jul 2 20:29 LucidaTypewriterRegular.ttf
-rw-rw-r--. 1 saml saml 242576 Jul 2 21:02 Lucida Sans Typewriter Regular for Powerline.ttf

If I run it 2 more times on the resulting files, I get the same output each time as above, resulting in files looking like this:

$ ls -ltr | grep ttf
-rw-r--r--. 1 saml saml 242700 Jul 2 20:29 LucidaTypewriterRegular.ttf
-rw-rw-r--. 1 saml saml 242576 Jul 2 21:02 Lucida Sans Typewriter Regular for Powerline.ttf
-rw-rw-r--. 1 saml saml 242780 Jul 2 21:04 Lucida Sans Typewriter Regular for Powerline for Powerline.ttf
-rw-rw-r--. 1 saml saml 242984 Jul 2 21:07 Lucida Sans Typewriter Regular for Powerline for Powerline for Powerline.ttf

All these resulting .ttf files appear valid when I attempt to open them using ImageMagick's display command:

$ display Lucida Sans Typewriter Regular for Powerline for Powerline for Powerline.ttf

Takeaways

So it would seem you can reprocess font files using the patching script. It's unclear to me why the size keeps growing as you perform this operation, so I would keep the originals handy just in case you encounter problems. If it was me, I would probably ditch the previously patched files and regenerate them just to be on the safe side.

References: Font patching
Can we patch an already patched font?
1,298,696,585,000
When opening a new tab in gVim (with :tabe), the status line at the bottom of the screen disappears. If I press : and start typing a command I can no longer see the command on the status line. When the gVim window is maximized, opening a tab pushes the status line below the screen. When the gVim window is not maximized, the window will increase in height. This problem happens in Gnome and in Xmonad. I'm looking for a way to get around this issue. Is there a way to force the window to redraw such that the status line fits inside the window?
This happens to me as well. The workaround that I use is to minimize gVim then maximize it again. After that the status bar is visible again. Bug is described here: https://bugs.launchpad.net/ubuntu/+source/vim/+bug/137854 Bug is reported fixed in debian, but the issue is still there with Ubuntu 11.04 (Natty)
gVim opening a tab pushes status line out of window
1,298,696,585,000
How can I remove a line containing a specific word, but with this condition: if another word is also found on the line, then don't delete it? For example, I'm deleting lines having sup or gnd with :%s/.*sup.*//g or :%s/.*gnd.*//g, but that is also deleting some lines I need to keep, which breaks the code. I don't want to delete lines which have module in the same line as gnd or sup. Any idea how to handle this?
You could use:

:v/module/s/\v.*(sup|gnd).*//

:v/pattern/cmd runs cmd on the lines that do not match the pattern. \v turns on very-magic so that the (, | characters are treated as regexp operators without the need to prefix them with \.

Note that it empties but otherwise doesn't remove the lines that contain sup or gnd. To remove them, since you can't nest g/v commands, you could use vim's negative look-ahead regexp operator in one g command instead:

:g/\v^(.*module)@!.*(sup|gnd)/d

:g/pattern/cmd runs cmd on the lines that do match the pattern. (pattern)@! (with very-magic) matches with zero width if the pattern is not matched at that position, so ^(.*module)@! matches at the beginning of a line that doesn't contain module.

There's also the option of piping to sed or awk:

:%!sed -e /module/b -e /sup/d -e /gnd/d

/module/b branches off (and the line is printed) for lines that contain module. For the lines that don't, we carry on with the next two commands, which delete the line if it contains sup or gnd respectively.

Or:

:%!awk '!/sup|gnd/ || /module/'

which prints a line unless it contains sup or gnd without also containing module.

If you wanted to find the files that need those lines removed and remove them, you'd probably want to skip vim and do the whole thing with text processing utilities. On a GNU system:

find . ! -name '*.back' -type f -size +3c -exec gawk '
  /sup|gnd/ && ! /module/ {printf "%s\0", FILENAME; nextfile}' '{}' + |
  xargs -r0 sed -i.back -e /module/b -e /sup/d -e /gnd/d

(here saving a backup of the original files as the-file.back; change -i.back to -i if you don't need the backup).

find finds the regular files whose name doesn't end in .back and whose size is at least 4 bytes (smaller ones can't possibly contain a line that contains sup or gnd and the line delimiter) and runs gawk with the paths of the corresponding files as arguments. When gawk finds a matching line in any of those files, it prints the path of the file (the FILENAME special variable) delimited with a NUL character (for xargs -0) and skips to the next file. xargs -r0 reads the paths that gawk outputs and runs sed with those file paths as arguments; sed edits the files in place.
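To sanity-check the keep/delete logic on a toy input (line contents invented), here is a filter that keeps a line unless it mentions sup or gnd without also mentioning module:

```shell
# Keep a line unless it contains sup/gnd without module on the same line.
printf 'module vdd sup\nnet sup1\nnet gnd!\nplain wire\n' |
    awk '!/sup|gnd/ || /module/'
# prints:
# module vdd sup
# plain wire
```

The two middle lines are dropped because they contain sup or gnd but no module, which is exactly the deletion rule the question asks for.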
How to delete a line which contains a specific syntax but I want to avoid a line which contains another word
1,298,696,585,000
In the gvim editor, how do I split the editor window horizontally, so that it includes two files and I can see both simultaneously?
From the command line, open specified files as horizontal splits:

gvim -o report.log power.log area.log

or as vertical splits:

gvim -O report.log power.log area.log

Within the editor, open a file for editing in a new horizontal split:

:split filename
:sp filename

or in a new vertical split:

:vsplit filename
:vs filename

Check out :h windows.txt for more help.
splitting the gvim editor window horizontally
1,298,696,585,000
I would like to run vim via X11 (over SSH). Vim comes pre-installed on my NAS (running Debian), however the GUI is not. When I run vim -g, I get this error: E25: GUI cannot be used: Not enabled at compile time Is there any way to install the GUI by itself, or do I need to reinstall vim altogether?
You can apt-get install either vim-gtk or vim-gnome or even vim-lesstif to get a vim GUI.
Install Gvim after vim
1,298,696,585,000
In Linux vim and gvim, I can make a visual selection by pressing V (e.g., v VISUAL or Shift+V VISUAL LINE), releasing it, and using movement keys (e.g., arrows) to highlight the desired text. In Windows gvim, however, I must hold down Shift while I make my selection. Also, Ctrl+V (block) selection is supplanted by the Windows paste, and I can select text without v at all, like the common Windows method of Shift+arrow keys. I don't want any of this in my vim. This is a minor annoyance at best, but there's probably a way to make Windows use the former (Linux) behavior of only pressing V once. I've scoured the docs to no avail. How can I dispense with all of this Windows stuff and get consistent behavior between my Windows and Linux environments?
When gvim starts, it sources a file called mswin.vim via the _vimrc file. In the mswin.vim file the keys are remapped. You can undo this in two ways. One is to edit the mswin.vim file and remove the mappings (not recommended). A second, easier, potentially less invasive way is to edit the _vimrc file.

1. Start gvim as Administrator.
2. Click Edit->Startup Settings (this will load the _vimrc file).
3. The beginning of the file will look something like this:

set nocompatible
source $VIMRUNTIME/vimrc_example.vim
source $VIMRUNTIME/mswin.vim
behave mswin

4. Delete the line that sources mswin.vim and the line that sets mswin behaviour. The end result will look something like this:

set nocompatible
source $VIMRUNTIME/vimrc_example.vim

This should fix your problem. If you want it to behave more like vi than vim, you can also change nocompatible to compatible.
Vim visual selection method differs between Windows & Linux?
1,298,696,585,000
Is it possible to use gvim --remote-silent and similar as an editor for visudo and sudoedit? Actually, I don't think this is related to the --remote option. Even if I set Defaults editor = "/usr/bin/gvim", the tmpfile gvim loads is blank and editing it has no effect.
gvim returns almost immediately. When sudoedit notices that the editor has returned it will finish reporting no changes. To get sudoedit to work correctly you need to get it to wait until you are finished editing. I normally use -f switch to do this. I have not tried it but the manual seems to support the use of --remote-wait or --remote-wait-silent.
visudo/sudoedit and gvim --remote-silent
1,298,696,585,000
As a user, I want to edit my crontab with crontab -e. gvim is launched. It prints:

"/tmp/crontab.IUVYhK/crontab" [New DIRECTORY]

I can write, but as soon as I try to write the temporary file, I get this error message:

"crontab.IUVYhK/crontab" E212: Can't open file for writing

However, I have no issue when using vi as editor:

EDITOR=vi crontab -e

Is it wrong to set gvim as EDITOR? Should I use vi? I do very few admin tasks on this desktop machine, so I never ran into any issue.
You must use a synchronous editor for crontab -e, i.e. one where the command doesn't return until the editing is complete. For example:

export EDITOR="gvim --nofork"
crontab -e

An alternative is this:

crontab -l > ~/.crontab
gvim ~/.crontab    # wait until editing is finished
crontab ~/.crontab
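The reason --nofork matters: crontab hands the editor a temporary file and re-reads it the moment the editor command returns. A tiny simulation with a stand-in synchronous "editor" (the crontab line and paths are invented):

```shell
# Simulate the crontab -e flow: give a temp file to an "editor" command and
# read it back when the command returns. The stand-in below is synchronous,
# like vi or "gvim --nofork"; a forking gvim would return before writing,
# and crontab would see no change.
tmp=$(mktemp)
edit() { printf '0 2 * * * /usr/local/bin/backup.sh\n' >> "$1"; }
edit "$tmp"                   # returns only after the write completes
grep -c backup.sh "$tmp"      # prints: 1
rm -f "$tmp"
```

With a forking editor, the grep would run before the edit landed, which is exactly why crontab reports "no changes" in that case.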
`crontab -e: E212: Can't open file for writing` when using gvim (works with vi)
1,298,696,585,000
I have a key mapping in my ~/.vimrc file that re-indents edited source code on the fly. It looks as follows:

" press F4 to fix indentation in whole file; overwrites marker 'q' position
noremap <F4> mqggVG=`qzz
inoremap <F4> <Esc>mqggVG=`qzza

Short explanation:

mq : place marker 'q' at cursor position
ggVG : select all text
= : re-indent text
`q : return cursor back to position stored in 'q' marker
zz : center the display over the cursor
a : return to insert mode if called from it

It basically works, but has two shortcomings. The first one is that it overwrites the q marker. I use this marker to store the cursor position; I chose q because it is very unlikely that I would use this letter as a marker. Despite this, is there any more clever approach to achieve this, without destroying the q marker? The second one occurs in insert mode, when the cursor is at the beginning of a line. In that situation, F4 re-indents as expected, but also moves the cursor one position right. I tried to fix it by using <C-o> instead of <Esc>, but it looks like <C-o> is applicable only to editor :commands, not to move commands. How can I fix it?
You can use the last jump mark (m') as a temporary mark. To avoid using a different command to re-enter insert mode (i vs. a), you can use the gi command, which re-enters insert mode at the position where it was last exited: inoremap <F4> <Esc>m'ggVG=``zzgi
Keymaping for re-indent source code in Vim
1,298,696,585,000
When I open a file with gvim with the following command line option:

gvim file +5

I would like the current line (the 5th line) to be scrolled to the center of gVim's window. Is that possible?

--=--=----=--=----=--=----=--=--
|                              |
|                              |
|                              |
|                              |
| this line was selected       |
|                              |
--=--=----=--=----=--=----=--=--

By default, it isn't centered.
You can use gvim file +5 -c "normal zz" The -c option allows you to specify an editor command to run when starting vim. zz, as others have mentioned, centers the screen on the cursor line.
Make gvim scroll to the center after opening a file with line numbers?
1,298,696,585,000
I use vim.gtk3 . to browse the current directory in the Netrw Directory Listing view. I know I can press x to act like xdg-open. I also know I can press - to navigate to the parent directory. But if I press Enter on an mp4 binary file, it will show in binary view. Or if I press Enter on a C text file, it will show in normal code view. At this point (binary or code view), how do I use a shortcut key to get back to the previous Netrw Directory Listing page? I pressed - and it doesn't work; I have to type :q to quit vim.gtk3 entirely. Is it possible to go back to the directory listing page from the text view after pressing Enter in the Netrw Directory Listing?
According to the vim wiki you should be able to do it with Ctrl+^ however it doesn't work on my system, it will only alternate between files. You can pull up a fresh explorer via :Explore or :e .
How to navigate back from text/binary file view in Vim's Netrw directory listing?
1,298,696,585,000
I am currently using gVim as editor and cannot think about going back to mode-less editing. I code in Java for Android and in Python for another project. While trying to set up gVim as my primary Python IDE, I have jumped through hoops looking for and installing plugins but its still not as good as when I use IntelliJ with vim emulation for Android. So my question is, is it worth installing and familiarizing myself with tons of plugins (NERDTree, Command-T, RopeVim; and I believe setting vim up for Android will be asking for even more trouble) or should I just install PyCharm and enable vim emulation? Can gVim ever provide me with useful debugging? (Watch windows, conditional breakpoints, logcat integration etc.)
Vim can get close(r) to an IDE in terms of features via various plugins, but it will always remain a powerful text editor with great extension capabilities. So for anything larger than a hobby project, you'll certainly miss IDE features like debugging, variable inspection, refactoring, find usages, etc. But why not have both? It's easy to set up a command to load the current file (at the current position) in GVIM (with --remote reusing a running instance), and both Vim and IDEs typically handle external file changes quite well. With that, you have the best of both worlds, just at the cost of switching between them (with Alt-Tab), and a little duplicated file / buffer management. I personally use IntelliJ IDEA (with default keybindings, so that I can still use it at a colleague's system) and GVIM together. Major editing is done in Vim, browsing, refactoring and debugging in the IDE.
Vim emulation in IDEs vs gVim as IDE
1,298,696,585,000
I recently had to change my Linux work environment from a personal Ubuntu system (having full admin rights) to a corporate Red Hat system (having very limited control of the system), both running GNOME. Many things seem to work differently, above all the gVim behaviour. I have Vim installed on two Linux machines and one Windows machine. I like the default Windows behaviour, so I set the .gvimrc file as follows:

source $VIMRUNTIME/mswin.vim
syn on
set hls
set tabstop=4
set shiftwidth=4
set smartindent
set smarttab

Some annoying differences I experience between Red Hat and Ubuntu or Windows are:

The undo (Ctrl+Z or Undo button) is acting like in Vi, that is, the 2nd undo undoes the 1st one, so the last change is removed and then restored. Instead, it should be an undo history (up to the undolevels variable setting).

When in Insert mode, deleting text with the backspace key does not remove the deleted text from the screen until I either move away from that line or exit Insert mode.

The following variables are set similarly in Red Hat and Windows:

nocompatible
undodir=.
noundofile
undolevels=1000
undoreload=10000

Question: How can I make my new Red Hat gVim environment behave like the Windows and Ubuntu ones?

Vim versions:
Red Hat - 7.4 (Aug 10, 2013)
Windows - 7.4 (Aug 10, 2013)
Ubuntu - 7.2 (Aug 9, 2008)
Sounds like your Vim is in vi-compatible mode; :set compatible? will print compatible then. You need to create a ~/.vimrc file (empty one will suffice) to switch Vim to nocompatible mode. In general, it's recommended to put your customizations there, and leave .gvimrc for the very few GUI-only settings.
Changing gVim behaviour on Red Hat
1,298,696,585,000
I'm still learning C++, but I know already some stuff. I used to use Visual Studio, but after I switched to Debian, I was working with Code Blocks. Recently I heard about using VIM as IDE and started using it. Problem that appeared is autocompletion is not working. I don't know why, but recently C-P/C-N stopped working - it does completion only if a particular word is already in the code or in the code of another tab; so every time I start I have to type every first include/cout/class etc. without autocompletion. I've tried to use YouCompleteMe, but unfortunately: YouCompleteMe unavailable: requires Vim 7.3.584+. I'm using Debian Wheezy (stable with backports) and it doesn't have vim 7.4 in the repos. I tried to add the repo from Jessie and - using low pinn - tried to install vim 7.4, but it wanted to remove a lot of packages (like g++ and many others), so I gave up this idea. I'm not very good on Debian, I'm using it for like ~1.5 year, but more like work-machine, not something I need to learn everything about it, so I need some help in: Installing vim 7.4 on Debian without removing half of the system or, Make YouCompleteMe work or, Make autocompletion in vim work (especially for C++)
install vim 7.4 on Debian without removing half of the system

Installing from source is a good choice. Compiling vim is not difficult at all. You can read more details and instructions here.

make YouCompleteMe work

Installing YouCompleteMe is a bit more involved, but there is good documentation at the YouCompleteMe GitHub repo; try it and tell us if you have any trouble.

make autocompletion in vim work (especially for C++)

Another option for C++ autocompletion is OmniCppComplete; it's easier to use and install than YouCompleteMe.
VIM as c++ IDE - autocomplete
1,298,696,585,000
Is there a way to use vim/gvim as an editor for thunderbird? There was an add-on for it but it is now very out of date.
You could look into the Teledactyl add-on for Thunderbird from 5digits.org. They produce a Pentadactyl add-on for firefox which works nicely for controls, although text boxes are admittedly un-vim-ish. Feature-list says it supports external editors, so gvim could be in your future.
Using vim/gvim as editor for Thunderbird
1,298,696,585,000
How to install gvim on a RHEL server that doesn't have it (for use with SSH with X11 forwarding*)? No sudo access is available, so it has to be in the user's home directory. *Getting all the convenience of having the remote Vim in a window that's separate from the shell.
It isn't very difficult to install vim in your home directory, and I see that you've found a way. However this is not necessarily the best solution. Running vim on the remote machine has the downsides of running a remote editor: it lags if the connection lags; it dies if the connection dies. You can use (g)vim locally to edit remote files. There are two approaches for that.

One approach is to mount the remote filesystem over sshfs. Sshfs is available on most unices (but not on Windows). Once mounted, you can edit files with Vim and generally manipulate files as if they were local. Sshfs requires SFTP access on the remote machine.

mkdir ~/net/someserver
sshfs someserver:/ ~/net/someserver
gvim ~/net/someserver/path/to/file
fusermount -u ~/net/someserver

Alternatively, you can make Vim do the remote access. The netrw plugin is bundled with Vim.

gvim scp://someserver/path/to/file

A limitation of both approaches is that the files are opened with the permissions of the user that you SSH into. This can be a problem if you need to edit files as root, since SSH servers are often configured to forbid direct root logins over SSH (not so much for security as for accountability: having a trace if a non-malicious root screws up). Vim has a plugin to edit over ssh and a plugin to edit over sudo, but I don't know how to combine the two.
gvim on RHEL (Red Hat Enterprise Linux) install in home directory
1,298,696,585,000
I use rbenv to manage ruby versions. I want to install gvim on my ArchLinux, and one of its dependencies is ruby. I have already used rbenv to install version 2.0.0-p247 of ruby as the root user and set rbenv global 2.0.0-p247, but when I try sudo pacman -S gvim, pacman still installs the package ruby-2.0.0_p247-1. How can I let pacman notice the ruby installed by rbenv?
You cannot. However, you can trick pacman into thinking you have (there are two ways to do this). Simply pass the --dbonly option: pacman -S --dbonly ruby This will commit the transaction to the database (make a record of the install), but not actually download or install any packages. If you want, you could also pass --asdeps to mark it as a dependency. Also to note: ruby may get installed for real on upgrade. I'm not sure. You may want to consider locking the version (it's ok because it's not really installed or critical, but normally you should not do this). Make rbenv provide ruby. You can do this by putting the following line in the rbenv PKGBUILD: provides=('ruby') After doing this, run makepkg again and reinstall the package with pacman -U foobar.pkg.tar.xz. You may have to mess with the version of this. See the wiki page on PKGBUILDS.
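For the second approach, the change to the PKGBUILD is a one-line addition. A hypothetical sketch follows; the field values are illustrative only, so take the real PKGBUILD from the AUR:

```bash
# Illustrative fragment of an rbenv PKGBUILD -- only the 'provides'
# line is the actual change; everything else comes from the AUR.
pkgname=rbenv
pkgver=1.2.0          # hypothetical version
pkgrel=1
arch=('any')
provides=('ruby')     # makes pacman treat this package as supplying ruby
```

After editing, rebuild with makepkg and reinstall with pacman -U as described above.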
How to let pacman notice the ruby installed by rbenv?
1,298,696,585,000
I'm on Red Hat Enterprise at my workstation, where both vim and gvim are installed. When running vim --version it's clear that I lack a lot of cool stuff (like clipboard capabilities). When running gvim --version it's clear that my gvim version is fully decked out. I'd like to run vim in the terminal, but I'd also like to use the full capabilities installed with my gvim install. Is there a way to run gvim in the terminal? Something like gvim --no-window or the like? Is there a way to force the vim command to use the backend of gvim, while still being in the terminal?
You can run gvim in TUI mode by passing -v, but note that you won't have the X clipboard registers unless you're running it under X.
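If you always want the vim command to behave this way, a shell alias is the simplest route; a sketch for ~/.bashrc (assuming gvim is on your $PATH):

```bash
# Make 'vim' run the fully-featured gvim binary in console (TUI) mode.
alias vim='gvim -v'
```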
Run gvim in terminal
1,298,696,585,000
Gvim's text width is set to 78, and I can't change the default text width to "0". I edited "C:\Program Files (x86)\Vim\_vimrc" with set textwidth=0 or set tw=0, but neither helped. How do I disable the textwidth parameter?
It depends on how you are testing the feature. According to the documentation, textwidth 'tw' number (default 0) local to buffer {not in Vi} Maximum width of text that is being inserted. A longer line will be broken after white space to get this width. A zero value disables this. 'textwidth' is set to 0 when the 'paste' option is set. When 'textwidth' is zero, 'wrapmargin' may be used. See also 'formatoptions' and |ins-textwidth|. When 'formatexpr' is set it will be used to break the line. NOTE: This option is set to 0 when 'compatible' is set. Presumably you didn't set compatible, but may have set paste. Alternatively, there may be some plug-in which is resetting the value. Or you could be expecting setting textwidth to affect already-entered text: VIM textwidth has no effect According to textwidth=0 and wrapwidth=0 in .vimrc.local not being respected you can see where it's set using :verbose set tw? wm? The answer by @garyjohn gives additional advice on troubleshooting this problem.
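If some ftplugin keeps resetting the option after your _vimrc runs, one workaround sketch (a generic autocmd, not anything specific to the asker's setup) is to re-apply the setting whenever a filetype is set:

```vim
" Re-disable automatic line breaking after any filetype plugin has run.
autocmd FileType * setlocal textwidth=0
" To find out which script last set the option, run interactively:
"   :verbose set tw? wm?
```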
Can't change default text width in GVim
1,298,696,585,000
I am working on RedHat RHEL5. I want to change the font in GVIM. The only font format that my GVIM accepts is *-courier-medium-r-normal-*-*-140-*-*-m-*-* It refuses to use the Courier\ New or Courier_New names. The default font is ugly and I wanted to change it to something prettier, like the monospace font that I use in my terminal, but xfontsel does not show this font. set guifont=* doesn't work either. My questions are: How to "convince" GVIM to accept other system fonts Or, how to install additional fonts so they can be delivered to GVIM in -*-*-*- Morse code format Edit :set guifont=* gives the error:

Font "*" is not fixed-width
Invalid font(s): guifont=*

To make the font selectable with xfontsel, additionally I had to use this trick:

xset fp+ ~/.fonts/ # maybe unnecessary
xset fp rehash
fc-cache
Take a look at this tutorial that shows how to install a custom font in your home directory, in a .fonts directory. The tutorial is titled: installing fonts in your home directory on Fedora 12. Once a custom font has been installed here you can use the pull downs in gvim to change the font or run the command: :set guifont=* Which will bring up the dialog for selecting your font in gvim. See this tutorial on the Vim wiki for doing this as well and making them permanent. I'd suggest the Proggy Fonts if you want something that looks good and fits nicely with doing development.
Setting programming font in RHEL 5 + gvim
1,298,696,585,000
I want to set up vim so that it opens files automatically in tabs and not just in buffers. I know that I could use alias gvim='gvim -p' or some such shell mapping, but I am wondering if there is a way to do that from vim itself. So, what I want is for gvim ook eek monkey to be equivalent in behaviour to gvim -p ook eek monkey i.e. a tab is opened for each file/buffer.
See :tabnew and :tabedit in :help tabpage. (I'm not sure if you can (or want to) re-map :edit) (Edit There is a related and helpful SO discussion) (Edit to match your refined question) I doubt it will be less hassle than alias gvim=gvim -p, but using autocmd (and some Vimscript, everything in your .vimrc) this might be possible. (But I'm not knowledgeable enough to go into detail with this.)
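Building on the autocmd idea, one rough sketch (untested against every startup mode; diff mode is explicitly skipped here) that rearranges the initially-opened files into tabs from the vimrc itself:

```vim
" On startup, open every file given on the command line in its own
" tab page -- roughly the effect of 'gvim -p'.
autocmd VimEnter * if !&diff && argc() > 1 | tab all | tabfirst | endif
```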
File open in tabs automatically
1,298,696,585,000
Any chance to get (g)vim displaying NERDTree and Tagbar above each other, to the left of the edited file?

+-----------+-------------+
| nerd tree | edited file |
| contents  |             |
+-----------+             |
| tagbar    |             |
| contents  |             |
+-----------+-------------+

I'd like to have it done in .vimrc somehow. Up to now the relevant section in my .vimrc looks like this:

" NERDTree shortcut
:nmap \e :NERDTreeToggle<CR>
" tagbar settings
let g:tagbar_left=1
nnoremap <silent> <F9> :TagbarToggle<CR>

However, when displayed, they're shown like this:

+----------+-----------+-------------+
| tagbar   | nerd tree | edited file |
| contents | contents  |             |
|          |           |             |
|          |           |             |
+----------+-----------+-------------+
That will be difficult. Both :NERDTreeToggle and :TagbarToggle use :vsplit internally, and there's no way to simply reconfigure or hook into it. You'd have to write wrappers for your \e and <F9> triggers that detect the current window layout, do the toggling, and then jiggle the windows around to fit your requirements. That last step alone is already quite involved. You have to push one of the sidebar windows down with :wincmd J, then make the right file window full-height again win :wincmd L. You see, it's not easy. What I do instead is always have only one of those plugins active. My personal mappings check for open sidebars, and close e.g. Tagbar before toggling on NERD_tree. That's much easier to implement.
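To give an idea of what such a wrapper involves, here is a rough, unpolished sketch; the function name is made up, and the hard-coded window numbers will only be right for this exact three-window layout (Tagbar on the far left, then NERD_tree, then the file):

```vim
" Open both sidebars, then move Tagbar below NERD_tree and restore
" the file window to full height. Needs refinement for other layouts.
function! s:StackedSidebars()
  NERDTreeToggle
  TagbarToggle
  " Assume window 1 is Tagbar (g:tagbar_left=1): push it to the bottom...
  1wincmd w
  wincmd J
  " ...then make the file window full-height on the right again.
  3wincmd w
  wincmd L
endfunction
nnoremap <silent> <F10> :call <SID>StackedSidebars()<CR>
```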
(g)vim - NerdTree and tagbar above each other left to edited file?
1,298,696,585,000
Please, consider a situation, where you find a nice example and want to copy it to your existing code to see, how it works. The indentation is almost never right right away. If there are several lines, line-by-line editing can be tedious. On another question, there were hints on how to add spaces into a block of lines and on another, how to use :paste-option, which is used to control comment-characters when pasting. (Is this right?) Can you use :paste or somehow in other way tell that when pasting, add, say 4 spaces into the front of every pasted line? Late addition: I use "+gP quite often to paste a block of lines. Thus the :paste below sounds very promising.
After pasting, you can do:

'[>']

to shift the just-inserted text by 'shiftwidth' columns. You can repeat the shift with . (the repeat command).
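If you paste from the clipboard often (as with the "+gP from the question), the shift can be rolled into a mapping; a vimrc sketch, with an arbitrary choice of <Leader> keys:

```vim
" Paste the clipboard before the cursor, then shift the pasted lines
" right by one 'shiftwidth'. Repeat the shift with . if once isn't enough.
nnoremap <Leader>p "+gP'[>']
" Variant that re-indents the pasted lines instead of shifting them:
nnoremap <Leader>= "+gP'[=']
```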
Pasting with spaces added into heads of lines (analogy to comment-chars)?
1,298,696,585,000
I installed vim-powerline and was wondering how to change the background colour of the normal mode -- currently set to #adf000 according to the Gimp -- to something else. I assume that the change will be in autoload/Powerline/Colorschemes/XXX.vim somewhere, I just cannot find it.
The colours of vim-powerline should be located in your .vim directory. If you use a plugin manager it may be .vim/bundle/ followed by the vim-powerline/autoload/Powerline/Colorschemes tree. The file you are looking for should be default.vim. The colour setting you are looking for is:

\ Pl#Hi#Segments(['mode_indicator'], {
\   'n': ['base03', 'green', ['bold']],
\   'i': ['darkestcyan', 'white', ['bold']],
\   'v': ['white', 'orange', ['bold']],
\   'r': ['white', 'violet', ['bold']],
\   's': ['white', 'gray5', ['bold']],
\ }),

n is for normal, i for insert, v for visual, r for replace and s for select mode. Additional colours can be defined in the call Pl#Hi#Allocate({ section.
Vim powerline plugin colour of Normal mode
1,298,696,585,000
When I switch from gvim to another application and then after some time switch back, the gvim window appears blank with the cursor blinking in the middle. Sometimes the toolbar and tabs look like white space. When I open a new tab, the tab bar doesn't refresh itself and shows the same tabs as before opening the new tab. When I resize the window everything goes back to normal. How can I fix this gvim rendering problem on Gentoo Linux (or how can I understand what causes this problem)?
You could try putting something like this in your vimrc:

au FocusGained * redraw!

It forces a full redraw whenever gvim regains focus.
Gtk application (Gvim) rendering troubleshooting
1,298,696,585,000
By convention, our C++ headers live in .hpp files. When I open a gvim window with a .cpp file (so C++ source), then use the open menus, I get file chooser window which allows me to select files for: C++ Source Files (*.cpp, *.c++) C Header Files (*.h) C Source Files (*.c) All Files (*.*) Clearly, none of those will match just C++ Headers -- whatever the extension is. So, my question is: How do I create a new entry for C++ Header Files (*.hpp, *.h++)? Bonus: How do I add (*) to the All Files option? I guess this will be the same method as above.
This can be configured via a buffer-local b:browsefilter variable, which is set in filetype plugins; for C/C++, $VIMRUNTIME/ftplugin/c.vim. To change / override this, just put the following into ~/.vim/after/ftplugin/cpp.vim:

let b:browsefilter = "C++ Source Files (*.cpp *.c++)\t*.cpp;*.c++\n" .
    \ "C++ Header Files (*.hpp *.h++)\t*.hpp;*.h++\n" .
    \ "C Source Files (*.c)\t*.c\n" .
    \ "All Files (*.*)\t*.*\n"
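For the bonus part of the question, the format is simply one display label, a tab, and a semicolon-separated glob list per line, so a catch-all entry would presumably look like the following (untested across GUI builds, which each interpret the filter slightly differently):

```vim
" Use a bare * pattern so files without any extension are matched too:
let b:browsefilter = "C++ Header Files (*.hpp *.h++)\t*.hpp;*.h++\n" .
    \ "All Files (*)\t*\n"
```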
C++ header/source files in file chooser
1,298,696,585,000
How do I add 4 spaces to every line between a mark (set with m plus a letter) and the current line? How do I do the same when using a visual block?
:'x,.s/$/    / would add 4 spaces at the end of the lines between mark x and the current line. In visual mode, you can press : which will bring up :'<,'> and then add s/$/    / to add 4 spaces to the end of each line in that selection. If you want to add 4 spaces at the right edge of the currently selected visual block, just enter A, type those 4 spaces and press Esc.
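If the goal is instead to prepend the spaces (as in the related question about line heads), the same substitute command anchored at the start of line works; a quick sketch of the variants (plain Ex commands, nothing plugin-specific):

```vim
" Four spaces at the start of every line from mark x to the current line:
:'x,.s/^/    /
" Same for a visual selection (press : while the lines are selected):
:'<,'>s/^/    /
" With a visual *block*: select the first column with Ctrl-v,
" press I, type the four spaces, then press Esc.
```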
Applying a change to every line?
1,298,696,585,000
I have a particular Perl script and a copy of the same script with changes made to it. I want to see both scripts in such a way that the changes made in the copy are highlighted in a split gvim editor, so that I can compare both simultaneously. What should I do for that? I know how to split gvim but I don't know how to compare.
I think the tool you may be looking for is vimdiff vimdiff file.copy file.original
how to see the changes made in a big Perl script, having a copy of the original, i.e. compare both in a split gvim editor simultaneously
1,300,461,063,000
What's the best way to find a laptop with hardware that is amenable to installing and running GNU/Linux? I'm looking for something that will ideally work with GPL'd drivers and/or firmware. I took a quick look at linux-on-laptops.com, but they don't list some of the newer models for Lenovo, for example. Most of the test cases seem to be quite dated. Also, it's not clear where to start looking given so much information. Unfortunately, the FSF website doesn't list many laptop possibilities. LAC, referenced from the FSF website, doesn't mention wireless connectivity as being a feature of their laptops, probably because of firmware issues. I've been looking for a laptop that will accept the ath9k driver because those cards don't require firmware, but getting the model type from generic specs pages is not always possible. Searching for lspci dumps online can be a roll of the dice. And then there's the issue of what kind of graphics card is ideal from an FSF perspective. From the FSF website: This page is not an exhaustive list. Also, some video cards may work with free software, but without 3D acceleration.
Gluglug and other RYF vendors sell laptops running LibreBoot, a free software, microcode-free bios replacement. LibreBoot supports hardware on which it is possible to remove the Intel Management Engine, a small proprietary operating system on modern Intel machines that has been the attack vector of major security exploits. There is some initial work toward creating a free software replacement for embedded controller firmware, but is apparently not ready for use. SSDs, hard drives and other components unfortunately contain non-free software as well. Systems with the most modern AMD and Intel processors currently cannot be made as freedom respecting as LibreBoot-supported hardware. It is currently necessary to depend upon non-free software if you want to use a laptop. Libreboot does greatly reduce the amount of critical non-free software required to use laptops, desktops and servers.
Ideal Hardware for GNU/Linux Laptop
1,300,461,063,000
I'm looking for a new work laptop for development work on Linux, and after testing way too many distros, I pretty much do not want to ever use anything other than Xubuntu or possibly some other Debian/Ubuntu-based Xfce setup. What worries me is that most of the recent laptops with Haswell processors I have looked at also support extremely high resolutions, which would require the desktop to use higher DPI settings (HiDPI) in order to be usable. Thus I was wondering: what is the status of HiDPI support in Xfce? (I have found some sources claiming the support is not good at the moment, but I didn't find any info on whether it is being worked on or not!)
XFCE has some support for HiDPI - you can change the setting across all monitors for HiDPI, but it doesn't vary between different screens in the way that it does on a Retina MacBook Pro. I'm using XFCE and Arch Linux on a Lenovo W540 with the high DPI display. Apart from Chrome not supporting HiDPI, things work well.
What is the status of HiDPI support in Xfce?
1,300,461,063,000
I have a PC oscilloscope, an Instrustar ISDS205X, which I used on Windows 10. Now that I have switched to Linux, I am unable to find the respective drivers for it. I have tried installing it on PlayOnLinux but the software doesn't install, and neither do its drivers. Is there any method to convert such Windows drivers to run on Linux? (My CPU is an i5-4570 and my distro is Debian 10 with KDE Plasma)
In short: no. To go further, a driver is a piece of software that interacts with the kernel of the operating system. When you're working in the kernel world, interoperability doesn't exist. POSIX neither. Everything is totally OS-specific: the architecture, the sub-systems and the way they have been built and designed, the standard library offered by the kernel to driver writers; there's nothing in common between Linux and Windows. The only ways you can get your oscilloscope working under Linux are:
- by using a Windows virtual machine and forwarding the USB device to it (possible with VirtualBox or QEMU).
- by doing reverse engineering while using it with a Windows workstation: analyse USB exchanges, try to guess the protocol used and the commands passed to achieve this or that operation... it's a very hard and long job ...
Installing Proprietary Windows Drivers on Linux
1,300,461,063,000
As you could notice from the topic, it needs to be able to get the CPU's stepping code properly. As Wikipedia says, there are stepping codes like A0, A2, B0 etc. So, commands in Linux (Ubuntu 16.04) give:

# dmidecode -t 4 | grep Stepping | awk '{ printf $8": "$9"\n" }'
Stepping: 2

# lscpu | grep Stepping
Stepping: 2

# cpuid | grep stepping
stepping id = 0x2 (2)

# cat /proc/cpuinfo | grep stepping
stepping: 2

The whole outputs:

cat /proc/cpuinfo (one core):

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping        : 2
microcode       : 0x13
cpu MHz         : 2400.208
cache size      : 12288 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt lahf_lm epb kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
bugs            :
bogomips        : 4800.41
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
...

cpuid (part):
...
family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6)
...
(simple synth) = Intel Core i7-900 (Gulftown B1) / Core i7-980X (Gulftown B1) / Xeon Processor 3600 (Westmere-EP B1) / Xeon Processor 5600 (Westmere-EP B1), 32nm
...

dmidecode -t 4 (part):
...
Signature: Type 0, Family 6, Model 44, Stepping 2
...

Some screenshot from the internet of the CPU-Z program: Some screenshot from the internet of the CPU-G program: So what's the 0x2 or 2? Why not A0 or B1 as mentioned on Wikipedia? How do I get this letter before the stepping number? Best regards, V7
There’s no way to map stepping numbers to stepping names using only information from the CPU. You need to look at specification updates from Intel; these contain descriptions of the errata fixed in various revisions of CPUs, and also contain identification information allowing the various steppings to be identified (where appropriate). For example, for your E8500, the specification update lists two revisions, C0 and E0; C0 corresponds to processor signature 10676h, E0 to processor signature 1067Ah (see table 1 on page 16). The last four bits in these signatures are the stepping values given in /proc/cpuinfo, lscpu etc., and in CPU-Z’s “stepping” field; as you can see, there’s no obvious correlation between the numeric values and the stepping names (6 for the E8500 stepping C0, A for the E8500 stepping E0). Tools such as CPU-Z contain all this identification information and use it to provide stepping names in their GUI.
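The numeric stepping is simply the low four bits of the processor signature, so the correspondence quoted above can be checked with nothing more than shell arithmetic; a small sketch using the two Penryn signatures from the specification update:

```shell
#!/bin/sh
# The stepping reported by /proc/cpuinfo is the last four bits of the
# processor signature; the letter names (C0, E0, ...) exist only in
# Intel's specification updates.
for sig in 0x10676 0x1067A; do
    printf 'signature %s -> stepping %X\n' "$sig" $(( sig & 0xF ))
done
```

This prints stepping 6 for the C0 signature and stepping A for E0, matching table 1 of the specification update.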
Get CPU stepping in Linux
1,300,461,063,000
Currently, a casual browse through a number of Linux distros shows spotty Poulsbo drivers at best. Has any headway been made recently towards either convincing Intel to coax the driver source out of PowerVR, or an acceptable OSS driver solution (one I can install without low frame rates or involved steps, and without fear that a kernel update will break it)? I would love to put Linux on my little Acer netbook but I rely on it too much to install a nerfed driver.
There are open-source gma500 drivers in kernel-3.2 by Alan Cox from Intel. They are lacking 2d/3d/video acceleration but hardware should intialize properly. Not sure you'll find it user-friendly, but it is at least "hacker-friendly" - i.e. allowing to hack-in the missing features (acceleration).
What is the state of open-source Poulsbo/GMA 500 drivers?
1,300,461,063,000
What are good external tablet input devices for Linux? It should be used for more convenient use of Inkscape and Gimp. Some characteristics that are useful, I guess: connected via USB included pen with some buttons tablet should only be sensitive to pen touches Open-source drivers (a must) some OS engagement by the vendor Open questions: What are good sizes of such tablets in practice? Is there is some good guide how to setup it under Linux/X? What are other great programs that are really easier to use with a tablet?
Friends of mine and I have had good experiences with the tablets from Wacom. The Bamboo series contains different tablets in different pricing categories. My Bamboo, for example, is connected via USB, the pen has 2 buttons, the tablet is only sensitive to the pen, it has some more buttons, and it works with my Linux out of the box. So this should satisfy your needs. Wacom supports Windows, Mac OS X and Linux without any problems as far as I know. They link to the Linux Wacom Project on their official homepage for driver support. After a little configuration of the input devices it works pressure-sensitively with Gimp. For advanced configuration of all tablet buttons and touch-sensitive areas there's the Wacom ExpressKeys project, which also works fine under the different distributions. To your questions: What are good sizes of such tablets in practice? This totally depends on your usage of the tablet. Are you just using it as an addition to your mouse? Are you going to start some kind of digital painting? etc. A common size for the "drawing" area of those tablets is ~ 5.8" x 3.6". This should work fine for average usage. More important than the size are IMHO the resolution and pressure levels the tablet supports, because these will influence your work. Keep this in mind when you are comparing tablets. Is there some good guide on how to set it up under Linux/X? The Linux Wacom Project maintains a nice Howto on that topic. Also there are several guides based more or less on the used distribution, e.g. ARCH and Ubuntu. What are other great programs that are really easier to use with a tablet? I often use my tablet also for audio processing. The editing of different audio tracks with a pen feels much more natural for me.
External tablet input devices for Linux (Inkscape/Gimp)?
1,300,461,063,000
I'm holding out for the ThinkPad Helix 2, which has a detachable tablet as a screen. It comes out later this month. I've heard that installing Linux on bleeding edge technology can be problematic due to driver issues, etc. Would I likely be able to use this computer to its fullest with Ubuntu (or, better yet, some customized Arch Linux installation)? I'm fairly new to Linux in case that matters.
Graphics Analysis HD 4K Support in Kernel 3.10 and UP Device contains Code-Name IVY BRIDGE Intel HD 4K Graphics Chip. IVY Bridge should fall under the MESA DRI on LKDB. Support Options: FOSS MESA DRI Driver or Official Intel i965 Driver. Wireless Analysis MBM Ericson Chipset Support Unsure if this is within the LKDB, but packages exist here.
Does Linux support the ThinkPad Helix 2?
1,300,461,063,000
There are lots of Linux distributions, and every one of them is made according to the maintainers' point of view, using different DEs, package managers, kernels etc. When some part of your hardware fails to work properly, should a bug report on that be forwarded to the distro maintainers or directly to the kernel maintainers? How can a novice Linux user know where to go when something is not working correctly and all other resources (Google, forums) have failed to solve the problem?
The first stop is your distribution bug tracker; from there on you will be guided to the next step. It is said that (unless you are able to reproduce the same bug in any distribution, or you compile from sources and are able to reproduce it on several systems) you should report downstream (i.e. at your distribution). In doubt always report to your specific distribution bug tracker, and be sure to read their guides, so the bug can be fixed promptly (i.e. no missing information that has to be teased out of the reporter). There are various guides for each distribution: Ubuntu Debian RHEL based (this seems to be the standard way to report bugs on RHEL and friends) Arch Linux Linux Kernel Unless asked by downstream, you should never report directly upstream.
Where to file a bug?
1,300,461,063,000
I'm checking out Arch in a VirtualBox (running on Ubuntu) before I install it on my machine. I have followed the wiki up until the display driver section. lspci gives: VGA compatible controller: InnoTek Systemberatung GmbH Virtual Graphics Adapter I assume this is some sort of VirtualBox compatibility layer; is there a way to bypass it and test my real display driver (Intel of some kind) in the virtual box?
No. It is not possible to give a VirtualBox VM access to the host video card, only the virtual interface you see listed there. In fact, this is true for most hardware, including network cards as well. The primary exception to this is some USB devices and storage controllers that can be revealed to the VM, if the host OS is not using them, via a special bridge driver. Using a Linux distro in a VM should give you a feel for whether you like the software or not, but it is not a good test of whether it interfaces well with your hardware. Instead you should use a LiveCD or bootable USB release to start it up with full access to your hardware. This will allow you to test all the things you want to check out without over-writing or re-partitioning your hard drive until you think it's going to work. As a final note, most Linux distros share relatively the same base of drivers and hardware compatibility. How well they juggle it all varies some, and sometimes one distro will have work-arounds for certain machines that have not made it into the upstream projects, but it's pretty safe to say that if your video card and display work in one Linux distro, they are likely to work in another distro of the same era.
Finding the right display driver for arch installation on a virtual box on Lenovo edge13''
1,300,461,063,000
I'm having trouble using my Apple Magic Keyboard (bluetooth wireless with LiIon battery, Lightning port for charging and tethered usage) with Fedora 25 (kernel: 4.8.15-300.fc25.x86_64). The problem is that when used in wireless mode, the Fn key does not seem to register. I tried xev and the key itself doesn't trigger any event, nor does the key pressed with another key cause the triggered event to be any different compared to the other key just being pressed on its own. The reason why I'd like to use the Fn key is because I want to map Fn + ←/→ to Home and End respectively and also use the multimedia keys which are now function keys by default. The interesting thing is that this keyboard acts as a normal Apple wired keyboard when I connect it with the lightning cable to the computer in which I assume is due to the bluetooth radio not being used and resorting to USB hardware/drivers (perhaps it's registered with a different USB device ID than the original Apple aluminium keyboard, I didn't verify). Doing so allows function key usage and all the tricks like function keys or multimedia keys by default that you find on the internet. However, I would like to have the same features available when using it as a bluetooth keyboard. I would go as far as patching the kernel, but have no idea where to start and how to test and debug (obviously I would like to try out less "invasive" means first). Any idea on how to address this problem are welcome. Update When I read from /dev/hidraw0, I get some activity when hitting the Fn key, so this could mean the fn keypress is registered by the system, but gets lost somewhere along the way... Update2 evtest does not show any event when pressing the Fn key and /dev/input/event4 (which is the event device for the Magic Keyboard) does not trigger an event (other keys do). So I think the problem is that the Fn key gets read by the system (implied by /dev/hidraw0 showing data) but it doesn't get passed on to /dev/input/event4. 
But this is just speculation as I don't know how the flow of user input data is meant to be working in Linux. Update 3 This is what several fn key presses (press+release) produce: > sudo cat /dev/hidraw2 | hexdump 0000000 0001 0000 0000 0000 0000 0001 0000 0000 0000010 0000 0200 0001 0000 0000 0000 0000 0001 0000020 0000 0000 0000 0200 0001 0000 0000 0000 0000030 0000 0001 0000 0000 0000 0200 0001 0000 0000040 0000 0000 0000 0001 0000 0000 0000 0200 0000050 0001 0000 0000 0000 0000 0001 0000 0000 0000060 0000 0200 0001 0000 0000 0000 0000 0001 0000070 0000 0000 0000 0200 0001 0000 0000 0000 0000080 0000 0001 0000 0000 0000 0200 0001 0000 0000090 0000 0000 0000 0001 0000 0000 0000 0200 00000a0 0001 0000 0000 0000 0000 0001 0000 0000 00000b0 0000 0200 0001 0000 0000 0000 0000 0001 00000c0 0000 0000 0000 0200 0001 0000 0000 0000 00000d0 0000 0001 0000 0000 0000 0200 0001 0000 00000e0 0000 0000 0000 0001 0000 0000 0000 0200 00000f0 0001 0000 0000 0000 0000 0001 0000 0000 0000100 0000 0200 0001 0000 0000 0000 0000 0001 0000110 0000 0000 0000 0200 0001 0000 0000 0000 0000120 0000 0001 0000 0000 0000 0200 0001 0000 0000130 0000 0000 0000 0001 0000 0000 0000 0200 0000140 0001 0000 0000 0000 0000 0001 0000 0000 0000150 0000 0200 0001 0000 0000 0000 0000 0001 0000160 0000 0000 0000 0200 0001 0000 0000 0000 0000170 0000 0001 0000 0000 0000 0200 0001 0000 0000180 0000 0000 0000 0001 0000 0000 0000 0200 0000190 0001 0000 0000 0000 0000 0001 0000 0000 00001a0 0000 0200 0001 0000 0000 0000 0000 0001 00001b0 0000 0000 0000 0200 0001 0000 0000 0000 00001c0 0000 0001 0000 0000 0000 0200 0001 0000 00001d0 0000 0000 0000 0001 0000 0000 0000 0200 00001e0 0001 0000 0000 0000 0000 0001 0000 0000 00001f0 0000 0200 0001 0000 0000 0000 0000 0001 0000200 0000 0000 0000 0200 0001 0000 0000 0000 0000210 0000 0001 0000 0000 0000 0200 0001 0000 0000220 0000 0000 0000 0001 0000 0000 0000 0200 0000230 0001 0000 0000 0000 0000 0001 0000 0000 Weirdly enough, sometimes 2 lines but mostly 1 line is 
printed after releasing fn. This is what F2 and Fn+F2 respectively look like: sudo cat /dev/hidraw2 | hexdump 0000000 0001 0000 0000 0000 0000 0001 3b00 0000 ^[OQ0000010 0000 0000 0001 0000 0000 0000 0000 0001 ^[OQ0000020 3b00 0000 0000 0000 0001 0000 0000 0000 ^[OQ0000030 0000 0001 3b00 0000 0000 0000 0001 0000 0000040 0000 0000 0000 0001 3b00 0000 0000 0000 ^[OQ0000050 0001 0000 0000 0000 0000 0001 3b00 0000 ^[OQ0000060 0000 0000 0001 0000 0000 0000 0000 0001 ^[OQ0000070 3b00 0000 0000 0000 0001 0000 0000 0000 0000080 0000 0101 0000 0000 0000 0000 0101 0600 ^C Fn+F2: > sudo cat /dev/hidraw2 | hexdump 0000000 0001 0000 0000 0000 0000 0001 0000 0000 ^[OQ0000010 0000 0200 0001 3b00 0000 0000 0200 0001 0000020 0000 0000 0000 0200 0001 3b00 0000 0000 ^[OQ0000030 0200 0001 0000 0000 0000 0200 0001 3b00 ^[OQ0000040 0000 0000 0200 0001 0000 0000 0000 0200 ^[OQ0000050 0001 3b00 0000 0000 0200 0001 0000 0000 ^[OQ0000060 0000 0200 0001 3b00 0000 0000 0200 0001 0000070 0000 0000 0000 0200 0001 3b00 0000 0000 ^[OQ0000080 0200 0001 0000 0000 0000 0200 0001 3b00 ^[OQ0000090 0000 0000 0200 0001 0000 0000 0000 0200 ^[OQ00000a0 0001 3b00 0000 0000 0200 0001 0000 0000 00000b0 0000 0200 0001 0000 0000 0000 0000 0101 00000c0 0000 0000 0000 0000 0101 0600 0000 0000 ^C Update 4 As requested from @dirkt, here's the report descriptor information (I couldn't run the line as per the comment, so here's the full dump; also note that it's now hidraw2 as I had to replace the keyboard): > sudo ./hid-desc /dev/hidraw2 Report Descriptor Size: 171 Report Descriptor: 05 01 09 06 a1 01 85 01 05 07 15 00 25 01 19 e0 29 e7 75 01 95 08 81 02 95 05 75 01 05 08 19 01 29 05 91 02 95 01 75 03 91 03 95 08 75 01 15 00 25 01 06 00 ff 09 03 81 03 95 06 75 08 15 00 25 65 05 07 19 00 29 65 81 00 95 01 75 01 15 00 25 01 05 0c 09 b8 81 02 95 01 75 01 06 00 ff 09 03 81 02 95 01 75 06 81 03 06 02 ff 09 55 85 55 15 00 26 ff 00 75 08 95 40 b1 a2 c0 06 00 ff 09 14 a1 01 85 90 05 84 75 01 95 03 15 00 25 01 09 61 05 85 
09 44 09 46 81 02 95 05 81 01 75 08 95 01 15 00 26 ff 00 09 65 81 02 c0 00 Raw Name: Magic Keyboard Raw Phys: 00:c2:c6:f7:eb:57 Raw Info: bustype: 5 (Bluetooth) vendor: 0x004c product: 0x0267
Partial answer: Making sense of the HID infrastructure and the HID raw data

(Disclaimer: I've only done all this for USB, but I suppose it will apply in the same or a similar way to Bluetooth.)

HID devices can send and receive reports in a well-defined format. The format for a particular device is given by the HID descriptor, which for USB is very similar to the other USB descriptors (e.g. lsusb can list them if they are not bound). Details (for USB) can be found in the Device Class Definition for Human Interface Devices (HID) PDF document. Kernel documentation for HID can be found in Documentation/hid. As hiddev.txt explains, the dataflow for an event is like this:

usb.c --> hid-core.c --> hid-input.c --> input-subsystem

In drivers/hid/hid-input.c, in particular in the routine hidinput_configure_usage, a report is parsed according to the HID descriptor. So if you can't see the Fn key, that's where things go wrong.

The output seen at hidraw0 looks suspiciously like there are several kinds of reports with different IDs (this report has ID 1, normal keyboard reports have ID 0). But to make sure, we need the HID descriptor(s).

HID descriptors are available via an ioctl on the hidraw device. You can use for example https://github.com/DIGImend/usbhid-dump to get the descriptor (USB only), and https://github.com/DIGImend/hidrd to parse it. There's also the samples/hidraw/hid-example.c file in the kernel source that shows how to get the HID descriptor via the ioctl; it can easily be modified to produce a hex dump similar to usbhid-dump. You'll have to use this for Bluetooth, so I put it in a pastebin. Compile with make.

(If you are not used to compiling external projects: download the zip file for both, unpack each into an empty directory, ./bootstrap, ./configure, make. Now you can use the binaries directly, add them to $PATH, etc.)
Now you can parse the descriptor using

sudo ./hid-desc /dev/hidraw0 | tail -n+3 | head -1 | hidrd-convert -ihex -ospec

In addition to providing this output (or the hexdump, if anything doesn't work), please test what happens on hidraw if you press the Fn in combination with various other keys (alphabetic, arrows). Also test what happens for normal keypresses.

I'm not sure about the best way to proceed if it's not possible to make the kernel recognize the special reports. Maybe the simplest way is to write a C program that analyzes events from hidraw and produces additional input events, similarly to input-create.

Update: The HID descriptor contains an extra 00 at the end. If you remove that, it parses to

Usage Page (Desktop),               ; Generic desktop controls (01h)
Usage (Keyboard),                   ; Keyboard (06h, application collection)
Collection (Application),
    Report ID (1),                  ; +00 report id
    Usage Page (Keyboard),          ; Keyboard/keypad (07h)
    Logical Minimum (0),
    Logical Maximum (1),
    Usage Minimum (KB Leftcontrol), ; Keyboard left control (E0h, dynamic value)
    Usage Maximum (KB Right GUI),   ; Keyboard right GUI (E7h, dynamic value)
    Report Size (1),
    Report Count (8),
    Input (Variable),               ; +01 modifier
    Report Count (5),
    Report Size (1),
    Usage Page (LED),               ; LEDs (08h)
    Usage Minimum (01h),
    Usage Maximum (05h),
    Output (Variable),
    Report Count (1),
    Report Size (3),
    Output (Constant, Variable),
    Report Count (8),
    Report Size (1),
    Logical Minimum (0),
    Logical Maximum (1),
    Usage Page (FF00h),             ; FF00h, vendor-defined
    Usage (03h),
    Input (Constant, Variable),     ; +02 vendor
    Report Count (6),
    Report Size (8),
    Logical Minimum (0),
    Logical Maximum (101),
    Usage Page (Keyboard),          ; Keyboard/keypad (07h)
    Usage Minimum (None),           ; No event (00h, selector)
    Usage Maximum (KB Application), ; Keyboard Application (65h, selector)
    Input,                          ; +03 6 keysym bytes
    Report Count (1),
    Report Size (1),
    Logical Minimum (0),
    Logical Maximum (1),
    Usage Page (Consumer),          ; Consumer (0Ch)
    Usage (Eject),                  ; Eject (B8h, one-shot control)
    Input (Variable),               ; +09.0
    Report Count (1),
    Report Size (1),
    Usage Page (FF00h),             ; FF00h, vendor-defined
    Usage (03h),
    Input (Variable),               ; +09.1
    Report Count (1),
    Report Size (6),
    Input (Constant, Variable),     ; +09.2-7
    Usage Page (FF02h),             ; FF02h, vendor-defined
    Usage (55h),
    Report ID (85),
    Logical Minimum (0),
    Logical Maximum (255),
    Report Size (8),
    Report Count (64),
    Feature (Variable, No Preferred, Volatile),
End Collection,
Usage Page (FF00h),                 ; FF00h, vendor-defined
Usage (14h),
Collection (Application),
    Report ID (144),
    Usage Page (Power Device),      ; Power device (84h, power page)
    Report Size (1),
    Report Count (3),
    Logical Minimum (0),
    Logical Maximum (1),
    Usage (61h),
    Usage Page (Power Batsys),      ; Power battery system (85h, power page)
    Usage (44h),
    Usage (46h),
    Input (Variable),
    Report Count (5),
    Input (Constant),
    Report Size (8),
    Report Count (1),
    Logical Minimum (0),
    Logical Maximum (255),
    Usage (65h),
    Input (Variable),
End Collection

There is one input event report with id hex 01, one battery status report with id hex 90, one output to set the LEDs as usual, and one vendor-specific feature control. I marked the bytes for the input event report. There are several vendor-defined fields where we don't know what they do, and have to guess.

The input event report consists of 10 bytes, and your examples decode as follows:

ID MM VA K1 K2 K3 K4 K5 K6 VB
01 00 00 00 00 00 00 00 00 02  ; press? Fn
01 00 00 00 00 00 00 00 00 00  ; release? Fn
01 00 00 3b 00 00 00 00 00 00  ; press F2
01 00 00 00 00 00 00 00 00 00  ; release
01 00 00 00 00 00 00 00 00 00  ;
01 00 00 00 00 00 00 00 00 02  ; press Fn?
01 00 00 3b 00 00 00 00 00 02  ; press F2
01 00 00 00 00 00 00 00 00 02  ; release F2 (but not Fn?)

ID is the report id. MM are the standard 8 modifier bits, which don't have room for the Fn key. K1 to K6 are up to 6 keys pressed simultaneously. VA and VB are vendor specific.
Assuming you held Fn and just pressed and released F2 in the last example, my guess is that bit 1 in VB represents the modifier for Fn (or at least something related to it).

Use hexdump -e '10/1 "%02X ""\n"' to get 10 bytes of output per line, and test this hypothesis by combining Fn with several keys, including those combinations you want to redefine in the end.

Update: For completeness and future reference, though I assume it's not relevant anymore for this particular case: it's possible to inject HID events using UHID, see Documentation/hid/uhid.txt and samples/uhid/uhid-example.c in the kernel.
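The decode table above translates directly into a tiny decoder. The sketch below is mine, not part of the original answer: the field layout (ID MM VA K1..K6 VB) and the Fn-as-bit-1-of-VB interpretation are the hypotheses from the analysis, not documented behaviour, and `decode_report` is just an illustrative name.

```python
def decode_report(report):
    """Decode one 10-byte Magic Keyboard input report into its guessed fields."""
    if len(report) != 10 or report[0] != 0x01:
        raise ValueError("not a keyboard input report")
    return {
        "modifiers": report[1],                  # MM: standard 8 modifier bits
        "vendor_a": report[2],                   # VA: vendor-specific, meaning unknown
        "keys": [k for k in report[3:9] if k],   # K1..K6: up to 6 keysyms
        "fn": bool(report[9] & 0x02),            # VB bit 1: hypothesised Fn modifier
    }

if __name__ == "__main__":
    # "press F2 while Fn is held" row from the decode table above
    sample = bytes([0x01, 0x00, 0x00, 0x3B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02])
    print(decode_report(sample))
```

Reading 10 bytes at a time from /dev/hidraw2 and feeding them through this function would let you check the Fn hypothesis against more key combinations.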
Fn key for Bluetooth Apple Magic Keyboard (2015)
1,300,461,063,000
What video cards for a desktop computer are compatible with Linux and can support dual monitors? Our startup's default environment needs two monitors. Can somebody recommend a cheap video card that would let us use two monitors?
It's been years since I last saw a graphics card that wasn't properly supported on Linux, and quite a while since I last saw a single-framebuffer card. The two biggest off-board chipsets, nVidia and AMD (ex ATI) both offer well-supported multi-screen configurations for X11 on Linux (closed-source binary drivers may be needed to enable all features on some chipsets). After a tiny amount of research, I found a sub-£20 card with a well-known chipset and three ports (VGA, HDMI, DVI). You can attach the two digital outputs to two monitors and they'll work fine. I'd optimise on the feature set of the cards, not the availability of double framebuffers. Also, if your workstations have on-board video, and depending on your needs, you can get a card with a single framebuffer (assuming you can find them) and get two displays that way — they don't need to be provided by the same card!
What video cards for a Desktop computer compatible with Linux can support Dual Monitors?
1,300,461,063,000
I've connected the monitors to the dock, and the monitors detect when they are being connected and disconnected, so there doesn't seem to be any issue with the signal as such. All the other plugs on the dock are also working perfectly (power, Ethernet, USB to keyboard and mouse, USB-C to laptop). Basically everything is working fine, but Linux is not detecting the monitors connected to the dock. sudo dmesg --follow does not show anything when disconnecting and reconnecting a monitor. Should this be solvable? I'm running XWayland on GNOME on an up-to-date Arch Linux 5.10.47-1-lts.
What does the lsusb command say about it? If the output line for the dock includes ID 17e9:600a, then it is this one: a DisplayLink dock. DisplayLink docks essentially provide an extra USB-connected almost-GPU that needs its own evdi driver module. The driver package also includes firmware that is needed for the USB-GPU to work, a libevdi library, and a closed-source DisplayLink Manager application. You could get the firmware and the application by extracting the driver package and then build the driver and library from sources available on GitHub. The ArchWiki also seems to have advice on using DisplayLink devices on Arch. As far as I've understood, the procedure should be essentially the same as with the USB-3.0 DisplayLink devices, although your dock uses the newer USB-C connection.
DisplayPort monitors via HP USB-C Universal Dock not detected by HP EliteBook 840 G7
1,300,461,063,000
I've been thinking that I want a mechanical keyboard. Currently I'm using a Logitech G510, which is perfect with its 18 programmable keys (I'm using it with this), but I hear a lot of people drooling over mechanical keys, especially programmers, and I certainly do write a lot and on occasion I code too. Not to mention gaming, of course. But one thing I really cannot think of losing is the 18(x3) programmables. I don't care about the rest of the features, but being able to open folders, switch workspaces and navigate through my browser with single buttons changes everything. Not to mention it is very efficient. Media keys are also always nice (Play/Pause, volume wheel). The Corsair Vengeance K-95 was the first keyboard I found that seems to match my needs. Does it work with Linux? (If so, how well?) Can I map custom commands (like terminal commands) to the G keys?
This keyboard doesn't work properly on Linux: the entire keyboard freezes if you press any macro key. To be more precise, a kernel issue is currently in progress [1], and a userspace driver is available with some limitations [2].

[1] Bug 79251 - Keyboard status indicators not functioning properly.
[2] K70/K95 RGB (Unofficial) Linux Driver
Corsair Vengeance K-95 Keyboard & Linux [closed]
1,300,461,063,000
I have a Radeon HD 7790 which apparently won't work as well in Linux (haven't tried it yet). My idea is to install Windows as the host and do the Linux work in a VM (which involves stuff that needs 3D acceleration). Could this work?
Depends on the software you use. Most support some level of 3D now - VMware Workstation and VirtualBox both do to some extent. As an aside, I have an HD7790 at home and it works fine under Ubuntu 13.04. Use either the open source radeon driver OR get the newest driver from AMD's website, though; the one that comes with Ubuntu is too old to properly recognize the card.
Will Linux as a guest be able to make use of hardware support of a Windows host?
1,300,461,063,000
My choice of UPS is this -- CyberPower CP900EPFCLCD or APC Smart-UPS 750VA (SMT750I). I intend to connect the UPS via USB for data transfer, not Ethernet. According to the UPS HOWTO (http://tldp.org/HOWTO/html_single/UPS-HOWTO/), the first one is advised to run with NUT, and the second with Apcupsd. Are those packages equally good?

Another issue -- support. When I run into problems and look for help, with which package (and its community) will I solve my problems faster? In short, which package do you think will cause fewer problems in configuring and using it (with the UPS)?

I am asking this because those UPSs look almost the same to me (*); the only thing I don't know now is how well they work with Linux. If you are a user of one of those UPS+package combinations, please simply answer with your experience: "I had no problems at all" or "I had to install Windows to make it work properly". You don't have to know both of them to answer.

(*) The first one is cheaper (price of the device, and less energy consumed), the other provides longer power support. For me it is a draw at this point.
I have used NUT with a wide variety of APC UPS models. Support has gotten even better over the years. I would recommend NUT. It works well for a single-server, single-UPS setup and for any number of more complicated configurations. I generally recommend NUT whenever I need to monitor a UPS and shut down appropriately. Consider enabling the CGI scripts, which will allow you to monitor, query, and configure the UPS using a web browser. The command line utilities work well. Configuration is a little complicated, but each configuration file has a single purpose. Once you get the configuration done, it is pretty well set and forget. Using the default configuration, your server won't shut down until the UPS reports low battery. If desired you can log data to syslog, or use a custom script to write your own log.
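To give an idea of what that configuration looks like, here is a minimal single-server sketch, not a tested configuration: the usbhid-ups driver follows NUT's hardware compatibility list for USB HID UPSes such as the CyberPower above, and "myups" and the password are placeholders.

```
# /etc/nut/ups.conf -- define one USB-connected UPS
[myups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsd.users -- a monitoring user for upsmon
[monuser]
    password = secret
    upsmon master

# /etc/nut/upsmon.conf -- shut down when the UPS reports low battery
MONITOR myups@localhost 1 monuser secret master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

With files along these lines in place, upsc myups@localhost should report battery charge and status, and upsmon triggers SHUTDOWNCMD once the UPS is on battery and reaches the low-battery threshold.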
How good is the support for NUT-based UPSs against APC?
1,300,461,063,000
I am using Linux (I have not tried the following on Windows). I live in Europe. I have a zone 1 DVD that I can read with VLC and a DVD reader connected to the SATA port of an old computer.

Problem: when the DVD reader is used outside this computer (using a SATA/USB converter) it is no longer able to read the zone 1 DVD! I have been able to check this with another DVD reader, same result: it reads the DVD when connected to the SATA port, but not when used externally. Here is what VLC writes on standard output:

libdvdnav: Using dvdnav version 5.0.3
libdvdnav: DVD Title: IDIOCRACY_SIDEA
libdvdnav: DVD Serial Number: 3554980E
libdvdnav: DVD Title (Alternative): IDIOCRACY_SIDEA
libdvdnav: DVD disk reports itself with Region mask 0x00fe0000. Regions: 1
libdvdread: Attempting to retrieve all CSS keys
libdvdread: This can take a _long_ time, please be patient
libdvdread: Get key for /VIDEO_TS/VIDEO_TS.VOB at 0x00000130
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_01_1.VOB at 0x000004cd
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_02_1.VOB at 0x000005a4
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_03_1.VOB at 0x000011c8
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_04_1.VOB at 0x0000fd1f
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_05_0.VOB at 0x000228bc
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_05_1.VOB at 0x0002e604
libdvdread: Elapsed time 0
libdvdread: Get key for /VIDEO_TS/VTS_06_1.VOB at 0x00211ea2
libdvdread: Elapsed time 0
libdvdread: Found 6 VTS's
libdvdread: Elapsed time 0
libdvdnav: Suspected RCE Region Protection!!!
libdvdnav: Suspected RCE Region Protection!!!
libdvdnav: Suspected RCE Region Protection!!!

When one of the DVD readers is used externally, VLC stops here; when used internally with the SATA port, it starts reading the DVD without problem. But in both cases the standard output is the same as above.

Any idea to explain this behavior?
Why does it work better when the DVD reader is connected internally? I believed that the "regionalization stuff" was encoded in the DVD reader itself? Thanks in advance, Julien

Edit: More details. In fact I have found another zone 1 DVD in my collection: "TAKEN". It is read without problem by an old external region-free DVD reader (in a USB external box):

$ sudo regionset /dev/sr1
regionset version 0.1 -- reads/sets region code on DVD drives
Current Region Code settings:
RPC Phase: II
type: NONE
vendor resets available: 4
user controlled changes resets available: 5
drive plays discs from region(s):, mask=0xFF
Would you like to change the region setting of your drive? [y/n]:n

When reading the DVD, VLC writes:

libdvdnav: Using dvdnav version 5.0.3
libdvdnav: DVD Title: TAKEN
libdvdnav: DVD Serial Number: 2ef5a0a4
libdvdnav: DVD Title (Alternative):
libdvdnav: DVD disk reports itself with Region mask 0x00f60000. Regions: 1 4

whereas the DVD that cannot be read in my original post ("IDIOCRACY_SIDEA") is region 1, not "1 4" like TAKEN above:

libdvdnav: Using dvdnav version 5.0.3
libdvdnav: DVD Title: IDIOCRACY_SIDEA
libdvdnav: DVD Serial Number: 3554980E
libdvdnav: DVD Title (Alternative): IDIOCRACY_SIDEA
libdvdnav: DVD disk reports itself with Region mask 0x00fe0000. Regions: 1

Is it expected that the region-free DVD reader is able to read a "region 1 4" DVD, but not a "region 1" DVD? I remark that a "zone 2 DVD" chosen at random in my collection yields the following VLC output:

libdvdnav: Using dvdnav version 5.0.3
libdvdnav: DVD Title: OBLIVION
libdvdnav: DVD Serial Number: 42c77106
libdvdnav: DVD Title (Alternative): G7_R1
libdvdnav: DVD disk reports itself with Region mask 0x00f50000. Regions: 2 4

So it is not really a "region 2" DVD, but "region 2 4"; VLC reads it without problem in any of my DVD readers.
What is surprising is that either of my two region-free DVD readers becomes able to read "IDIOCRACY_SIDEA" without problem once directly connected to the SATA port of the motherboard of an old computer (see my original post).

N.B.: Another DVD reader (a third one), being set to "zone 2", is unable to read both "TAKEN" and "IDIOCRACY_SIDEA" (no surprise there):

$ sudo regionset /dev/sr0
regionset version 0.1 -- reads/sets region code on DVD drives
Current Region Code settings:
RPC Phase: II
type: SET
vendor resets available: 4
user controlled changes resets available: 4
drive plays discs from region(s): 2, mask=0xFD
Would you like to change the region setting of your drive? [y/n]:n
In the beginning, the first computer DVD drives were so-called "RPC I" drives, which would let the CPU deal with large parts of the "regionalization stuff". This turned out to be easy to circumvent, so for quite a while any computer DVD drives on the market have all been "RPC II" drives, which will indeed handle the "regionalization stuff" internally. But even a "RPC II" drive still needs to be asked to do that, and apparently your SATA/USB converter fails to pass through the necessary commands for that.

Also, it's not just about regionalization: the original intent of the DRM scheme on DVDs was to make it impractical to use the multimedia data on DVDs with anything other than authorized player software, to stop/discourage easy copying of the digital data, or at the very least force the copiers to use methods that would cause detectable loss of quality. So the lack of support for the regionalization-related commands in the converter may very well be part of the DRM scheme: hardware manufacturers are supposed to implement those only if appropriately licensed and under the conditions specified by those licenses, under threat of getting sued for patent violations and/or for manufacturing a "DRM circumvention device".

libdvdnav: Suspected RCE Region Protection!!!

This indicates that libdvdnav is detecting that the disc itself might be using an "enhanced" form of region protection. Basically, the disc includes some code that is run in a VM within the player, and that code may also query which region(s) the drive will support. If it gets an answer that indicates more than one region, or that the drive's region code is unset, it will refuse to play the rest of the content. Not all discs have this "enhanced" region protection.
$ sudo regionset /dev/sr1
regionset version 0.1 -- reads/sets region code on DVD drives
Current Region Code settings:
RPC Phase: II
type: NONE
vendor resets available: 4
user controlled changes resets available: 5
drive plays discs from region(s):, mask=0xFF
Would you like to change the region setting of your drive? [y/n]:n

This output indicates the drive is of the "RPC II" type as I mentioned earlier, but it looks like its region setting has never actually been set to any value. This might mean that the drive's region-freeness could be implemented with so-called "auto-reset" firmware, which will conveniently forget any region settings (and the fact that such settings may have been made previously) whenever power is removed, and/or when the drive tray is opened.

If that's true, then you might want to try using "regionset" to set the drive to region 1 and then play the problem disc. If the setting persists, and the "user controlled changes resets available" counter decrements and stays decremented, the drive might not be really region-free after all. But if the disc plays, and then the drive forgets the setting after the disc is removed/the drive is unpowered, then that might be just what you'll need to do with RCE discs.
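As a side note, the mask values in the regionset and libdvdnav output above all follow the same convention: each of the eight bits of the region byte stands for one region, and a cleared bit means the region is enabled. This small sketch (mine, not part of the answer) decodes such a byte, and reproduces every example in this question:

```python
def regions_from_mask(mask):
    """Return the DVD regions (1-8) enabled by a region mask byte.

    A cleared bit enables a region: bit 0 is region 1, bit 1 is region 2, etc.
    For libdvdnav masks like 0x00fe0000, pass the region byte (here 0xfe).
    """
    return [r for r in range(1, 9) if not (mask >> (r - 1)) & 1]

if __name__ == "__main__":
    print(regions_from_mask(0xFE))  # IDIOCRACY mask 0x00fe0000 -> [1]
    print(regions_from_mask(0xF6))  # TAKEN mask 0x00f60000 -> [1, 4]
    print(regions_from_mask(0xF5))  # OBLIVION mask 0x00f50000 -> [2, 4]
    print(regions_from_mask(0xFD))  # region-locked drive mask 0xFD -> [2]
```

This matches the "Regions: 1", "Regions: 1 4" and "Regions: 2 4" lines printed by libdvdnav, and the "region(s): 2, mask=0xFD" line printed by regionset for the zone 2 drive.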
DVD reader able to read a zone 1 DVD only internally
1,300,461,063,000
I have heard it said before that Ubuntu has the best hardware support of any Linux distro, but I'm confused how that could be the case. Don't the drivers go into the kernel, which means the only thing that should matter for hardware support is what kernel version you're using? I know the binary-only drivers and firmware are stripped out in distros that use the Linux-libre kernel, but set those aside for a moment -- is there any particular reason why some hardware would work on Ubuntu but not Fedora/Arch/SUSE when they're on the same kernel version?
Short answer: Yes, but I'm lying.

Long answer: ultimately what you need to support some hardware is a driver. Some drivers aren't open source, which makes it harder for them to be fixed, updated and adapted to changes. Some drivers are also compiled into the kernel, so you might need to recompile your kernel if you wish to use these (rather exotic) features. However, if we compare Gentoo - the distro known for compiling (almost) everything from source code and doing things from scratch - with Ubuntu - the distro that's "noob-friendly" - we will see that to get your standard laptop configuration working (webcam, microphone, speakers and an Optimus dual graphics card setup), you need to do much more on the Gentoo side: you need to find the proper drivers, compile them, and set up the configuration so that X recognizes both cards. In Ubuntu it usually "just works" or is fixable with a few simple commands. However, ultimately you will be able to get the same support on both of the distros. That's why I'm lying. The true difference lies in the ease of using the device. Ubuntu is "plug-and-play"; Gentoo requires some handwork.
Do Linux distros have different levels of hardware support? [closed]
1,300,461,063,000
Note: I thought about it and really felt like it belongs here. It's not about hardware as much as it is about Linux drivers/compatibility. Am I wrong?

I'm not new to Linux - but I mostly administered VMs here and there. While considering which board to buy for my new PC (and while thinking of Microsoft's announcement about 22H2 being the last Windows 10 update), I started to entertain the thought of installing Linux on my main workstation. I will probably buy the AMD 7900X CPU, and am considering these motherboards:

Gigabyte X670 GAMING X AX
Gigabyte X670 AORUS ELITE AX

I'm mostly wondering about the motherboard/chipset compatibility with Linux, since the rest (HDD, SSD, GPU, RAM) should already be supported. I guess it will work, but I prefer to buy something that'll work better. Where can I find such info? Did anybody have experience with one of these motherboards?

Gigabyte's website only shows Windows 10/11 downloads. Ubuntu only shows prebuilt systems on their website (and most of them are OLD).

Thanks for the help!
"I thought about it and really felt like it belongs here. It's not about hardware as much as it is about Linux drivers/compatibility. Am I wrong?"

No, I think you are 100% correct.

"Did anybody have experience with one of these motherboards?"

With one of those boards specifically - no. But on my ASRock Z97 board in my home PC, the sound works with RHEL/CentOS 7.9 but not with RHEL 9. I've also experienced outright failure of Linux to install on certain motherboards, particularly newish ones (this was a few years back). It seems like it's a probability, nothing more, that older motherboards have a better chance of working with Linux than newer ones. Especially with consumer home desktop PCs, not so much Dell or HP style desktops that you would find in a business setting.

I don't think what you're asking is a thinly veiled shopping recommendation. It's a very good, specific question: will [a specific version & distribution of] Linux run on my hardware? If it doesn't and you require Linux, you have a problem and have to get different hardware, or you're not going to function.

Sorry I couldn't provide you an actual answer to your question, but I too would like to see an official list, somehow, that shows for a given Linux version and distribution its compatibility with the hardware it is planned to run on. Is such a thing solely a property of the Linux kernel, which distributions such as Red Hat or Ubuntu then decide to use? Or is it at the distribution level, and not the kernel choice, that determines whether Linux will work? Take my experience of sound working in RHEL 7.9 and then not in RHEL 9: did the kernel folks forget to include a driver, or did Red Hat? And if the answer is "just use Ubuntu instead", then I don't know if that will work until I try, which is unacceptable (certainly unacceptable in a business environment). We just assume and take for granted that Linux will/should work, and that's the problem!
Hardware ( motherboards, specifically ) compatibility with Linux
1,300,461,063,000
My graphics card is not recognized on my laptop with Debian Jessie installed and an Nvidia GeForce GTX 850M. glewinfo tells me it uses Mesa DRI with Intel (OpenGL 3.0) instead of Nouveau with the actual GPU (OpenGL 4.4+). nvidia-detect can't find my graphics card. lspci identifies my graphics card as a 3D controller, while the web tells me it should be identified as a VGA controller. I tried Bumblebee because I'm pretty sure my laptop includes that Optimus stuff, but it didn't change anything.

How can I make my laptop recognize my GPU? Is it a matter of /etc config files or something? I would like to stick with the Nouveau driver. However, if there is a "Debian" way (e.g. apt-get) to install the official Nvidia driver, I'll take it. Thank you.

Here's some news. I partially recovered my desktop. I did apt-get install xserver-xorg-video-intel|nouveau|nvidia (yes, all of them!). I didn't remove the xorg.conf generated by nvidia-xconfig; I just changed driver "nvidia" to "intel". I followed punctiliously this guideline from the ArchLinux community. I succeeded in running Bumblebee and was able to run optirun glxgears. But now, my desktop is at 640x480 instead of 1280x1024. It's probably a separate problem.
Here's my dpkg -l|grep nvidia:

ii bumblebee-nvidia           3.2.1-7     amd64  NVIDIA Optimus support using the proprietary NVIDIA driver
ii glx-alternative-nvidia     0.5.1       amd64  allows the selection of NVIDIA as GLX provider
ii libegl1-nvidia:amd64       340.65-2    amd64  NVIDIA binary EGL libraries
ii libgl1-nvidia-glx:amd64    340.65-2    amd64  NVIDIA binary OpenGL libraries
ii libgl1-nvidia-glx:i386     340.65-2    i386   NVIDIA binary OpenGL libraries
ii libgl1-nvidia-glx-i386     340.65-2    i386   NVIDIA binary OpenGL 32-bit libraries
ii libgles1-nvidia:amd64      340.65-2    amd64  NVIDIA binary OpenGL|ES 1.x libraries
ii libgles2-nvidia:amd64      340.65-2    amd64  NVIDIA binary OpenGL|ES 2.x libraries
ii libnvidia-eglcore:amd64    340.65-2    amd64  NVIDIA binary EGL core libraries
ii libnvidia-ml1:amd64        340.65-2    amd64  NVIDIA Management Library (NVML) runtime library
ii nvidia-alternative         340.65-2    amd64  allows the selection of NVIDIA as GLX provider
ii nvidia-detect              340.65-2    amd64  NVIDIA GPU detection utility
ii nvidia-driver              340.65-2    amd64  NVIDIA metapackage
ii nvidia-driver-bin          340.65-2    amd64  NVIDIA driver support binaries
ii nvidia-installer-cleanup   20141201+1  amd64  cleanup after driver installation with the nvidia-installer
ii nvidia-kernel-common       20141201+1  amd64  NVIDIA binary kernel module support files
ii nvidia-kernel-dkms         340.65-2    amd64  NVIDIA binary kernel module DKMS source
ii nvidia-modprobe            340.46-1    amd64  utility to load NVIDIA kernel modules and create device nodes
ii nvidia-settings            340.46-2    amd64  tool for configuring the NVIDIA graphics driver
ii nvidia-support             20141201+1  amd64  NVIDIA binary graphics driver support files
ii nvidia-vdpau-driver:amd64  340.65-2    amd64  Video Decode and Presentation API for Unix - NVIDIA driver
ii nvidia-xconfig             340.46-1    amd64  X configuration tool for non-free NVIDIA drivers
ii xserver-xorg-video-nvidia  340.65-2    amd64  NVIDIA binary Xorg driver

Link to my xorg.conf. Note: this file is not in /etc/X11/xorg.conf.d but directly in /etc/X11/.
The poster has an Nvidia Optimus laptop. It turns out, per the Bumblebee page on the Debian Wiki, that you need to do:

apt-get install bumblebee-nvidia primus

remove any existing xorg.conf, and prevent debconf from creating an xorg.conf during the installation of the packages above. @Spiralwise confirmed that this works for him.

Note courtesy of @Spiralwise: once bumblebee-nvidia and primus are installed, software that needs to run on the GPU must be launched like this: primusrun my_program.
My graphic card is not recognized on laptop/debian
1,300,461,063,000
Do Ryzen 3 2200U laptops work without issues with a stock install of Debian 9 or Ubuntu 18.04? I do not want to install, patch, or compile anything! I have to choose between a Core i3 6006U and a Ryzen 3 2200U. I know that Intel SoCs work out of the box on Linux, so they seem like the safe choice. Any experiences? Does anyone of you own a Ryzen 3 laptop, and have you tried Linux on it?
I don't know if this helps you, but I have tried Debian (Kali rolling) on a tower with a Ryzen 3 2200. It worked without problems.
Ryzen 3 2200u laptop linux compatibility
1,300,461,063,000
If I install Linux on a USB stick using computer A, then plug it into computer B with different hardware and try to boot and work on it, should I generally expect it to work? Or should I rather accept that Linux, when installed (as opposed to using a Live CD/USB), gets tied to the hardware and is generally not supposed to work seamlessly on different hardware? If the answer is "it depends", let's narrow the question:

All hardware is x86. Nothing fancily customized, just stock laptops/desktops currently available on the market;
Distros: latest openSUSE, Ubuntu or Qubes OS with default settings;
No fancy software, just web/office etc.

The background behind this question is that I am deciding whether to have separate USB stick Linux installations for each computer I have, or just clone the same one.
I've done it; when I got a new laptop about 2 years ago I just pulled out my old hard drive and put it into the new one. A couple months ago I upgraded the OS (Debian stable) and things are still working fine. The only thing I noticed is that instead of eth0 and wlan0 I have eth1 and wlan1. Generally, Linux installations include lots of drivers you don't need for your hardware, so if you add or change any hardware in the future, the new hardware will "just work." If you stray from common distros or start customizing them by blacklisting hardware you don't have or removing drivers or modules from the hard drive, you might have problems, but most likely your biggest issue will just be finding a network card that your distro doesn't have good support for. If you have separate sticks, they might feel "cleaner" (as you install software you don't end up using, etc., then switch to a "new" stick) over time, but cloning will probably remove some maintenance headaches like different passwords, different software versions, etc.
How tied are Linux installations to hardware? [duplicate]
1,300,461,063,000
I took my Intel Dual Band Wireless-AC 7260 card, model number 7260HMW, out of another laptop. I wanted to install it in my ThinkPad. Upon doing so and rebooting, the computer said "unauthorized wifi card" and that I need to remove it. I read that Lenovo does this to force the use of IBM wifi cards. I am currently running Ubuntu 14.04, and I know it has native support for this card whereas 13 did not. Is there a way to get around the BIOS reading the wifi card?
Not unless you're willing to replace the BIOS. ThinkPads are usually considered "corporate" machines and this restriction is a "feature" so that you as a user cannot install an unsecured or untested Wi-Fi card and bring down your corporate network. Even in the "home" space, it is still considered a "security feature". This is a design decision by the manufacturer and there's not going to be a good way around it. Your best bet is to replace the computer or buy a supported Wi-Fi card. Also, keep in mind that some platforms (there is an Intel-based one that I can't remember at the moment) are based on CPU, Wi-Fi, and GPU all working together in unison to achieve "better" battery life vs. performance. If your system is using a platform like this it may not be able to run a Wi-Fi card that does not conform.
Installing an unauthorized wifi card in a Lenovo thinkpad edge 14
1,300,461,063,000
I'm looking for a new desktop (tower) system to run Debian 11. I'd like to avoid having to download proprietary drivers for the hardware to work properly. Requirements: Reasonably recent CPU (Intel i7 or better equivalent) 16GB of RAM (upgradeable) Two screens with resolutions higher than 1920x1080 Especially with point 3 I had a bad experience in the past: On an older system with Intel UHD 550 (IIRC) graphics hardware, I had to add an external card (GeForce GT730) to even recognise my bigger screen, which seemed to work fine with the free Nouveau drivers, but crashed my XFCE session when sharing my screen on Zoom. I had to install proprietary NVIDIA drivers which are working OK now. Is there a graphics configuration that works without having to install proprietary drivers? Are there any built-in options (such as Intel UHD 770), or will I need to get an extra card? Are AMD systems/graphics cards better supported by free drivers than Intel ones? If I need an external card, which one would have the best (Free Software) driver support? NB: There's no need for fancy 3D gaming acceleration.
After reading the Phoronix article Intel Arc Graphics Running On Fully Open-Source Linux Driver, I decided on a system based around the following hardware:
Intel Arc A750 graphics card, 8GB GDDR6
Intel Core i7 13700KF CPU
MSI PRO B760M-P DDR4 mATX motherboard
Here is my experience:
Debian 11 (bullseye): Booting the live ISO used only one of my 1920x1080 screens (the other one wasn't detected) at its native resolution, albeit using the CPU-based (llvmpipe) renderer. I found this encouraging, so I installed the system. The installed system also used only one screen, but at 1024x768, which is pretty bad. Since the Phoronix article mentioned I need a newer kernel than bullseye's default 5.10, I managed to upgrade it to 6.0 (via Debian Backports). I also installed the i915 firmware as per the helpful example in the Firmware missing from Debian page. Still no improvement to the resolution, renderer or second screen though. The Phoronix article also mentioned Mesa should be >=22.2; unfortunately I couldn't find a way to install that on the system due to package version conflicts (any hints on how to do this are welcome). At this point, I gave up on Debian 11 (bullseye).
Debian 12 (bookworm): I found the package versions of bookworm more promising, as the kernel is already >=6.0 (currently 6.1.0-3) and Mesa >=22.2 (currently 22.3). I used the Debian Installer Bookworm Alpha 2 release. There's no live ISO (yet), so I did a straight install. Again, the installer image used the (one) screen's native resolution of 1920x1080, but the installed system went back to 1024x768 and again, the 2nd screen wasn't recognised. Since the bookworm release, the firmware-misc-nonfree package (which includes the Arc firmware) is in the non-free-firmware section (it used to be in non-free), so that section needs to be added to the /etc/apt/sources.list file. Still, the i915 kernel module wasn't used yet; dmesg | grep i915 revealed that the kernel needs an extra parameter. 
This issue was already mentioned in the Phoronix article and still applies to the bookworm kernel. To add the parameter i915.force_probe=56a1 to the kernel command line (check with cat /proc/cmdline), I needed to extend the value of GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub to include i915.force_probe=56a1, followed by running sudo update-grub to apply it. And after a reboot, success: Both screens came up at their native resolution and the glxinfo |grep renderer returned Mesa Intel (R) Arc(tm) A750 Graphics (DG2) instead of llvmpipe.
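The GRUB edit described above can be sketched as follows. This is a hedged illustration: it operates on a sample file so it is safe to run anywhere; on the real system the file is /etc/default/grub, sed -i is a GNU extension, and the edit must be followed by sudo update-grub.

```shell
# Work on a sample copy; on the real system, edit /etc/default/grub instead
# and then run: sudo update-grub
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub.sample
# Append i915.force_probe=56a1 inside the existing quoted value.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 i915.force_probe=56a1"/' /tmp/grub.sample
cat /tmp/grub.sample   # → GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.force_probe=56a1"
```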
Hardware recommendations for new Debian 11 desktop: Mostly free graphics drivers?
1,346,971,085,000
Say I have a huge text file (>2GB) and I just want to cat the lines X to Y (e.g. 57890000 to 57890010). From what I understand I can do this by piping head into tail or vice versa, i.e. head -A /path/to/file | tail -B or alternatively tail -C /path/to/file | head -D where A, B, C and D can be computed from the number of lines in the file, X and Y. But there are two problems with this approach:
You have to compute A, B, C and D.
The commands could pipe to each other many more lines than I am interested in reading (e.g. if I am reading just a few lines in the middle of a huge file).
Is there a way to have the shell just work with and output the lines I want? (while providing only X and Y)
I suggest the sed solution, but for the sake of completeness:
awk 'NR >= 57890000 && NR <= 57890010' /path/to/file
To cut out after the last line:
awk 'NR < 57890000 { next } { print } NR == 57890010 { exit }' /path/to/file
Speed test (here on macOS, YMMV on other systems): a 100,000,000-line file generated by seq 100000000 > test.in, reading lines 50,000,000-50,000,010. Tests in no particular order; real time as reported by bash's builtin time:
4.373 4.418 4.395 tail -n+50000000 test.in | head -n10
5.210 5.179 6.181 sed -n '50000000,50000010p;50000010q' test.in
5.525 5.475 5.488 head -n50000010 test.in | tail -n10
8.497 8.352 8.438 sed -n '50000000,50000010p' test.in
22.826 23.154 23.195 tail -n50000001 test.in | head -n10
25.694 25.908 27.638 ed -s test.in <<<"50000000,50000010p"
31.348 28.140 30.574 awk 'NR<50000000{next}1;NR==50000010{exit}' test.in
51.359 50.919 51.127 awk 'NR >= 50000000 && NR <= 50000010' test.in
These are by no means precise benchmarks, but the difference is clear and repeatable enough* to give a good sense of the relative speed of each of these commands.
*: Except between the first two, sed -n p;q and head|tail, which seem to be essentially the same.
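On a small file the equivalence of the main approaches is easy to check (a toy sketch, not a benchmark):

```shell
seq 100 > /tmp/big.txt
# Three ways to print lines 40 to 44:
sed -n '40,44p;44q' /tmp/big.txt          # quits after line 44
awk 'NR>=40 && NR<=44' /tmp/big.txt       # reads the whole file
tail -n +40 /tmp/big.txt | head -n 5
```

All three print the lines 40 through 44.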
cat line X to line Y on a huge file
1,346,971,085,000
Is there a way to head/tail a document and get the reverse output, given that you don't know how many lines there are in the document? I.e. I just want to get everything but the first 2 lines of foo.txt to append to another document.
You can use this to strip the first two lines: tail -n +3 foo.txt and this to strip the last two lines, if your implementation of head supports it: head -n -2 foo.txt (assuming the file ends with \n for the latter) Just like for the standard usage of tail and head these operations are not destructive. Use >out.txt if you want to redirect the output to some new file: tail -n +3 foo.txt >out.txt In the case out.txt already exists, it will overwrite this file. Use >>out.txt instead of >out.txt if you'd rather have the output appended to out.txt.
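A quick sketch of both directions (head -n -NUM needs GNU head):

```shell
seq 5 > /tmp/foo.txt
tail -n +3 /tmp/foo.txt   # strip the first 2 lines → 3 4 5
head -n -2 /tmp/foo.txt   # strip the last 2 lines (GNU head) → 1 2 3
```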
How to obtain inverse behavior for `tail` and `head`?
1,346,971,085,000
I have a file with many rows, and each row has a timestamp at the start, like [Thread-3] (21/09/12 06:17:38:672) logged message from code..... So, I frequently check 2 things from this log file:
The first few rows, which have the global conditions; the start time is also given there.
The last few rows, which have the exit status with some other info.
Is there any quick handy single command that could let me display just the first and last few lines of a file?
You can do it with sed or awk in one command. However you'll lose speed, because sed and awk will need to run through the whole file anyway. From a speed point of view it's much better to define a function using a combination of head + tail. This does have the downside of not working if the input is a pipe; however, you can use process substitution, in case your shell supports it (see the example below).
first_last () {
  head -n 10 -- "$1"
  tail -n 10 -- "$1"
}
Just launch it as first_last "/path/to/file_to_process". To use process substitution instead (bash, zsh, ksh-like shells only):
first_last <( command )
PS: you can even add a grep to check whether your "global conditions" exist.
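For instance, with 3 lines instead of 10 on a 10-line file:

```shell
first_last () {
  head -n 3 -- "$1"
  tail -n 3 -- "$1"
}
seq 10 > /tmp/log.txt
first_last /tmp/log.txt   # → 1 2 3 8 9 10
```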
Command to display first few and last few lines of a file
1,346,971,085,000
Variants of this question have certainly been asked several times in different places, but I am trying to remove the last M lines from a file without luck. The second most voted answer in this question recommends doing the following to get rid of the last line in a file: head -n -1 foo.txt > temp.txt However, when I try that in OSX & Zsh, I get: head: illegal line count -- -1 Why is that? How can I remove the M last lines and the first N lines of a given file?
You can remove the first 12 lines with:
tail -n +13
(That means print from the 13th line.)
Some implementations of head like GNU head support:
head -n -12
but that's not standard.
tail -r file | tail -n +13 | tail -r
would work on those systems that have tail -r (see also GNU tac) but is sub-optimal.
To remove the last n lines, where n is 1:
sed '$d' file
You can also do:
sed '$d' file | sed '$d'
to remove 2 lines, but that's not optimal. To remove the last 12 lines:
sed -ne :1 -e 'N;1,12b1' -e 'P;D'
But beware that won't work with large values of n with some sed implementations. With awk:
awk -v n=12 'NR>n{print line[NR%n]};{line[NR%n]=$0}'
To remove m lines from the beginning and n from the end:
awk -v m=6 -v n=12 'NR<=m{next};NR>n+m{print line[NR%n]};{line[NR%n]=$0}'
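The circular-buffer awk version can be checked on a toy file; here m lines are dropped from the beginning and n from the end:

```shell
seq 20 > /tmp/f.txt
# Drop the first 3 and the last 4 lines → prints 4 through 16
awk -v m=3 -v n=4 'NR<=m{next};NR>n+m{print line[NR%n]};{line[NR%n]=$0}' /tmp/f.txt
```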
Negative arguments to head / tail
1,346,971,085,000
The find command can output names of files as null-delimited strings (if -print0 is provided), and xargs can consume them with the -0 option turned on. But in between, it's hard to manipulate that collection of files - the sort command has a -z switch that makes it possible to sort those files, but head and tail don't have one. How can I do head and tail on those null-delimited inputs in a convenient way? (I can always create a short & slow ruby script, but I hope that there could be a better way.)
GNU head and tail since coreutils version 8.25 have a -z option for that. With older versions or for non-GNU systems, you can try and swap \0 and \n: find ... -print0 | tr '\0\n' '\n\0' | head | tr '\0\n' '\n\0' Note that some head implementations can't cope with NUL characters (and they're not required to by POSIX), but where find supports -print0, head and text utilities generally support NUL characters. You can also use a function to wrap any command between the two trs: nul_terminated() { tr '\0\n' '\n\0' | "$@" | tr '\0\n' '\n\0' } find ... -print0 | nul_terminated tail -n 12 | xargs -r0 ... Keep in mind that under nul_terminated, a \0 means a newline character. So for instance, to replace \n with _: find . -depth -name $'*\n*' -print0 | nul_terminated sed ' p;h;s,.*/,,;s/\x0/_/g;H;g;s,[^/]*\n,,' | xargs -r0n2 mv (\x0 being also a GNU extension). If you need to run more than one filtering command, you can do: find ... -print0 | nul_terminated cmd1 | nul_terminated cmd2 | xargs -r0 ... But that means running a few redundant tr commands. Alternatively, you can run: find ... -print0 | nul_terminated eval 'cmd1 | cmd2' | xargs -r0 ...
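With a recent GNU coreutils, the -z option makes this direct; in this sketch the NULs are converted to newlines only at the very end, for display:

```shell
printf 'one\0two\0three\0' | head -z -n 2 | tr '\0' '\n'   # → one two
printf 'one\0two\0three\0' | tail -z -n 1 | tr '\0' '\n'   # → three
```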
How to do `head` and `tail` on null-delimited input in bash?
1,346,971,085,000
The following shell command was expected to print only odd lines of the input stream:
echo -e "aaa\nbbb\nccc\nddd\n" | (while true; do head -n 1; head -n 1 >/dev/null; done)
But instead it just prints the first line: aaa. The same doesn't happen with the -c (--bytes) option:
echo 12345678901234567890 | (while true; do head -c 5; head -c 5 >/dev/null; done)
This command outputs 1234512345 as expected. But this works only in the coreutils implementation of the head utility. The busybox implementation still eats extra characters, so the output is just 12345.
I guess this specific way of implementation is done for optimization purposes. You can't know where the line ends, so you don't know how many characters you need to read. The only way not to consume extra characters from the input stream is to read the stream byte by byte. But reading from the stream one byte at a time may be slow. So I guess head reads the input stream into a big enough buffer and then counts lines in that buffer. The same can't be said for the case when the --bytes option is used. In this case you know how many bytes you need to read. So you may read exactly this number of bytes and not more than that. The coreutils implementation uses this opportunity, but the busybox one does not; it still reads more bytes than required into a buffer. It is probably done to simplify the implementation.
So the question: Is it correct for the head utility to consume more characters from the input stream than it was asked? Is there some kind of standard for Unix utilities? And if there is, does it specify this behavior?
PS You have to press Ctrl+C to stop the commands above. The Unix utilities do not fail on reading beyond EOF. If you don't want to press, you may use a more complex command:
echo 12345678901234567890 | (while true; do head -c 5; head -c 5 | [ `wc -c` -eq 0 ] && break >/dev/null; done)
which I didn't use for simplicity.
Is it correct for the head utility to consume more characters from the input stream than it was asked? Yes, it’s allowed (see below). Is there some kind of standard for Unix utilities? Yes, POSIX volume 3, Shell & Utilities. And if there is, does it specify this behavior? It does, in its introduction: When a standard utility reads a seekable input file and terminates without an error before it reaches end-of-file, the utility shall ensure that the file offset in the open file description is properly positioned just past the last byte processed by the utility. For files that are not seekable, the state of the file offset in the open file description for that file is unspecified. head is one of the standard utilities, so a POSIX-conforming implementation has to implement the behaviour described above. GNU head does try to leave the file descriptor in the correct position, but it’s impossible to seek on pipes, so in your test it fails to restore the position. You can see this using strace: $ echo -e "aaa\nbbb\nccc\nddd\n" | strace head -n 1 ... read(0, "aaa\nbbb\nccc\nddd\n\n", 8192) = 17 lseek(0, -13, SEEK_CUR) = -1 ESPIPE (Illegal seek) ... The read returns 17 bytes (all the available input), head processes four of those and then tries to move back 13 bytes, but it can’t. (You can also see here that GNU head uses an 8 KiB buffer.) When you tell head to count bytes (which is non-standard), it knows how many bytes to read, so it can (if implemented that way) limit its read accordingly. This is why your head -c 5 test works: GNU head only reads five bytes and therefore doesn’t need to seek to restore the file descriptor’s position. If you write the document to a file, and use that instead, you’ll get the behaviour you’re after: $ echo -e "aaa\nbbb\nccc\nddd\n" > file $ < file (while true; do head -n 1; head -n 1 >/dev/null; done) aaa ccc
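The difference is easy to reproduce. The seekable case is deterministic with GNU head; the pipe case depends on head's buffering, so no exact output is promised for it:

```shell
printf 'a\nb\nc\nd\n' > /tmp/t.txt
{ head -n 1; cat; } < /tmp/t.txt              # seekable: head seeks back → a b c d
printf 'a\nb\nc\nd\n' | { head -n 1; cat; }   # pipe: cat typically sees nothing
```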
head eats extra characters
1,346,971,085,000
Background: I am running an SSH server and have a user that I want to delete. I cannot delete this user because he is currently running a few processes that I need to kill first. This is the pipeline I am currently using to find out all the process IDs of that user:
ps -u user | awk '{print $1;}'
The output looks like this:
PID
2121
2122
2124
2125
2369
2370
I want to pipe this to kill -9 to kill all processes so I can delete this stupid user, like this:
ps -u user | awk '{print $1;}' | sudo xargs kill -9
But this does not work because of the PID header: kill: failed to parse argument: 'PID'
The question: I am thinking that there has to be a simple Unix command to remove the first line of input. I am aware that I can use tail for this, but I don't want to count how many lines the input contains to figure out exactly how many I want to display. I am looking for something like head or tail but inverted (instead of displaying only the first/last part of the stream, it displays everything but the start/end of the stream).
Note: I managed to solve this issue by simply adding | grep [[:digit:]] after my awk command, but I am still looking for a way to delete the first line of a file as I think it would be quite useful in other scenarios.
NOTE: if your system already has pgrep/pkill then you are re-inventing the wheel here. If your system doesn't have these utilities, then you should be able to format the output of ps to get the unencumbered PID list directly e.g. ps -u user -opid= If you are already using awk, there is no need to pipe through an additional process in order to remove the first line (record): simply add a condition on the record number NR ps -u user | awk 'NR>1{print $1;}' Since you mention head and tail, the formula you probably want in this case is tail -n +2
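Both forms drop the header before anything reaches kill; a toy check:

```shell
printf 'PID\n2121\n2122\n' | awk 'NR>1{print $1}'   # → 2121 2122
printf 'PID\n2121\n2122\n' | tail -n +2             # same
```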
Command to remove the first N number of lines in input
1,346,971,085,000
I need a utility that will print the first n lines, but then continue to run, sucking up the rest of the lines, but not printing them. I use it to not overwhelm the terminal with the output of a process that needs to continue to run (it writes results to a file). I figured I can do process | { head -n 100; cat > /dev/null; }, but is there something more elegant?
To continue "sucking up" the output from process while only printing the first 100 (or whatever) lines: process | awk 'NR<=100' Or: process | sed -n '1,100p'
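Both keep consuming input after the cutoff (so the writing process never gets SIGPIPE), while printing only the first lines:

```shell
seq 8 | awk 'NR<=3'     # → 1 2 3 (the remaining lines are read and discarded)
seq 8 | sed -n '1,3p'   # same
```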
Alternative to 'head' that doesn't exit?
1,346,971,085,000
Given the following 3 scripts: printf 'a\nb\nc\n' > file && { head -n 1; cat; } < file printf 'a\nb\nc\n' | { head -n 1; cat; } { head -n 1; cat; } < <(printf 'a\nb\nc\n') I'd expect the output from each to be: a b c but for some of those, on some systems, that is not the case. For example, on cygwin: $ printf 'a\nb\nc\n' > file && { head -n 1; cat; } < file a b c $ printf 'a\nb\nc\n' | { head -n 1; cat; } a $ { head -n 1; cat; } < <(printf 'a\nb\nc\n') a What is causing the different output from those scripts? Additional info - this is apparently not just a head problem: $ printf 'a\nb\nc\n' | { sed '1q'; cat; } a $ printf 'a\nb\nc\n' | { awk '1;{exit}'; cat; } a $ { sed '1q'; cat; } < <(printf 'a\nb\nc\n') a $ { awk '1;{exit}'; cat; } < <(printf 'a\nb\nc\n') a What would be a robust, POSIX way in shell (i.e. without just invoking awk or similar once to do everything) to read some number of lines from input and leave the rest for a different command regardless of whether the input is coming from a pipe or a file? This question was inspired by comments under an answer to sort the whole .csv based on the value in a certain column.
head may read its whole input. It must read at least what it outputs (otherwise it is logically impossible to implement), but it may read more. Typically head asks the operating system to read a fixed-size buffer (by calling the read system call or similar). It then looks for newline characters in that buffer, and prints output until it reaches the desired number of lines. All POSIX compliant implementations of head call lseek to reset the file position on the input to be just after the end of the part that has been copied to the output. However this is only possible if the file is seekable: this includes ordinary files, but not pipes. If the input is a pipe, whatever head has read is discarded from the pipe and cannot be put back. This explains the difference you observed between <file (regular file) and | or <() (pipe). The relevant section of the standard above is: When a standard utility reads a seekable input file and terminates without an error before it reaches end-of-file, the utility shall ensure that the file offset in the open file description is properly positioned just past the last byte processed by the utility. For files that are not seekable, the state of the file offset in the open file description for that file is unspecified. A conforming application shall not assume that the following three commands are equivalent: tail -n +2 file (sed -n 1q; cat) < file cat file | (sed -n 1q; cat) The second command is equivalent to the first only when the file is seekable. The third command leaves the file offset in the open file description in an unspecified state. Other utilities, such as head, read, and sh, have similar properties. 
Some head implementations such as the head builtin of ksh93 (enabled after builtin head, and provided it was included at build time) do also try not to leave the cursor past the last line they have output when the input is not seekable — in ksh93's case by reading the input one byte at a time (as shells' read builtins typically do) or by peeking at the contents of pipes before reading them on systems that have such a possibility (not Linux). But those are rather the exception as there's a steep performance penalty.
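Since read consumes input byte by byte when it is not seekable, it is the portable way to take exactly one line off a pipe and leave the rest for the next command (a sketch):

```shell
printf 'a\nb\nc\n' | {
  IFS= read -r first      # consumes exactly the first line, even from a pipe
  printf 'first: %s\n' "$first"
  cat                     # sees everything after that line → b c
}
```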
Can `head` read/consume more input lines than it outputs?
1,346,971,085,000
Just hit this problem, and learned a lot from the chosen answer: Create random data with dd and get "partial read warning". Is the data after the warning now really random? Unfortunately the suggested solution head -c is not portable. For folks who insist that dd is the answer, please carefully read the linked answer which explains in great detail why dd can not be the answer. Also, please observe this: $ dd bs=1000000 count=10 if=/dev/random of=random dd: warning: partial read (89 bytes); suggest iflag=fullblock 0+10 records in 0+10 records out 143 bytes (143 B) copied, 99.3918 s, 0.0 kB/s $ ls -l random ; du -kP random -rw-rw-r-- 1 me me 143 Apr 22 19:19 random 4 random $ pwd /tmp
Unfortunately, to manipulate the content of a binary file, dd is pretty much the only tool in POSIX. Although most modern implementations of text processing tools (cat, sed, awk, …) can manipulate binary files, this is not required by POSIX: some older implementations do choke on null bytes, input not terminated by a newline, or invalid byte sequences in the ambient character encoding. It is possible, but difficult, to use dd safely. The reason I spend a lot of energy steering people away from it is that there's a lot of advice out there that promotes dd in situations where it is neither useful nor safe. The problem with dd is its notion of blocks: it assumes that a call to read returns one block; if read returns less data, you get a partial block, which throws things like skip and count off. Here's an example that illustrates the problem, where dd is reading from a pipe that delivers data relatively slowly: yes hello | while read line; do echo $line; done | dd ibs=4 count=1000 | wc -c On a bog-standard Linux (Debian jessie, Linux kernel 3.16, dd from GNU coreutils 8.23), I get a highly variable number of bytes, ranging from about 3000 to almost 4000. Change the input block size to a divisor of 6, and the output is consistently 4000 bytes as one would naively expect — the input to dd arrives in bursts of 6 bytes, and as long as a block doesn't span multiple bursts, dd gets to read a complete block. This suggests a solution: use an input block size of 1. No matter how the input is produced, there's no way for dd to read a partial block if the input block size is 1. (This is not completely obvious: dd could read a block of size 0 if it's interrupted by a signal — but if it's interrupted by a signal, the read system call returns -1. A read returning 0 is only possible if the file is opened in non-blocking mode, and in that case a read had better not be considered to have been performed at all. In blocking mode, read only returns 0 at the end of the file.) 
dd ibs=1 count="$number_of_bytes" The problem with this approach is that it can be slow (but not shockingly slow: only about 4 times slower than head -c in my quick benchmark). POSIX defines other tools that read binary data and convert it to a text format: uuencode (outputs in historical uuencode format or in Base64), od (outputs an octal or hexadecimal dump). Neither is well-suited to the task at hand. uuencode can be undone by uudecode, but counting bytes in the output is awkward because the number of bytes per line of output is not standardized. It's possible to get well-defined output from od, but unfortunately there's no POSIX tool to go the other way round (it can be done but only through slow loops in sh or awk, which defeats the purpose here).
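A minimal sketch of the byte-exact read:

```shell
# Read exactly 10 bytes from a pipe; with ibs=1 a partial read is impossible.
printf 'abcdefghijKLMNOP' | dd ibs=1 count=10 2>/dev/null   # → abcdefghij
```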
What's the POSIX way to read an exact number of bytes from a file?
1,346,971,085,000
Possible Duplicate: IO redirection and the head command I just wanted to remove all but the first line of a file. I did this: head -1 foo.txt ... and verified that I saw only the first line. Then I did: head -1 foo.txt > foo.txt But instead of containing only the first line, foo.txt was now empty. Turns out that cat foo.txt > foo.txt also empties the file. Why?
Before the shell starts processing any data, it needs to make sure all the input and output is squared away. So in your case using > foo.txt basically tells the system: "create a (new) file named foo.txt and stick all the output from this command into that file". The problem is, as you found out, that that wipes out the previous contents. Related, >> will append to an existing file. Update: Here's a solution using sed, handle with care: sed -i '2,$d' foo.txt It will delete lines 2 to "last" in-place in file foo.txt. Best to try this out on a file you can afford to mess up first :) This slightly modified version of the command will keep a copy of the original with the .bak extension: sed -i.bak '2,$d' foo.txt You can specify any sequence of characters (or a single character) after the -i command line switch for the name of the "backup" (ie original) file.
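The in-place sed on a throwaway file (sed -i is a GNU/BSD extension, not POSIX):

```shell
printf 'line1\nline2\nline3\n' > /tmp/foo.txt
sed -i '2,$d' /tmp/foo.txt   # delete lines 2 through last, in place
cat /tmp/foo.txt             # → line1
```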
Why does `cat`ing a file into itself erase it? [duplicate]
1,346,971,085,000
I can do diff filea fileb to see the difference between files. I can also do head -1 filea to see the first line of filea or fileb. How can I combine these commands to show the difference between the first line of filea and the first line of fileb?
If your shell supports process substitution, try: diff <(head -n 1 filea) <(head -n 1 fileb)
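For example (process substitution is a bash/zsh/ksh feature, not plain sh):

```shell
printf 'x\nsame\n' > /tmp/a.txt
printf 'y\nsame\n' > /tmp/b.txt
diff <(head -n 1 /tmp/a.txt) <(head -n 1 /tmp/b.txt)
```

which reports only the differing first lines (< x / > y) and says nothing about the identical second lines.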
Compare the heads of two files in bash
1,346,971,085,000
The following commands seem to be roughly equivalent: read varname varname=$(head -1) varname=$(sed 1q) One difference is that read is a shell builtin while head and sed aren't. Besides that, is there any difference in behavior between the three? My motivation is to better understand the nuances of the shell and key utilities like head,sed. For example, if using head is an easy replacement for read, then why does read exist as a builtin?
Neither efficiency nor builtinness is the biggest difference. All of them will return different output for certain input. head -n1 will provide a trailing newline only if the input has one. sed 1q will always provide a trailing newline, but otherwise preserve the input. read will never provide a trailing newline, and will interpret backslash sequences. Additionally, read has additional options, such as splitting, timeouts, and input history, some of which are standard and others vary between shells.
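The backslash difference is easy to see; read without -r treats backslash as an escape character:

```shell
printf 'a\\b\n' | { read x; printf '[%s]\n' "$x"; }         # → [ab]
printf 'a\\b\n' | { IFS= read -r x; printf '[%s]\n' "$x"; } # → [a\b]
printf 'a\\b\n' | head -n 1                                 # → a\b
```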
Is there a difference between read, head -1, and sed 1q?
1,346,971,085,000
I'm currently using watch head -n 17 * which works, but also shows all lines up to the 17th. Basically, I would like to show only the last line for each file that is shown with my current approach. How can I achieve that?
Example: for the sake of example, let's reduce the line count to 7. Given an example file containing the lines 1 through 8, watch head -n 7 * outputs
1
2
3
4
5
6
7
where I want just:
7
With GNU awk:
watch -x gawk '
  FNR == 17 {nextfile}
  ENDFILE {if (FNR) printf "%15s[%02d] %s\n", FILENAME, FNR, $0}' ./*
Which gives an output like:
./file1[17] line17
./short-file2[05] line 5 is the last
Note that the ./* glob is expanded only once, at the time watch is invoked. Your watch head -n 17 * was an arbitrary command injection vulnerability as the expansion of that * was actually interpreted as shell code by the shell that watch invokes to interpret the concatenation of its arguments with spaces. If there was a file called $(reboot) in the current directory, it would reboot. With -x, we're telling watch to skip the shell and execute the command directly. Alternatively, you could do:
watch 'exec gawk '\''
  FNR == 17 {nextfile}
  ENDFILE {if (FNR) printf "%15s[%02d] %s\n", FILENAME, FNR, $0}'\'' ./*'
For watch to run a shell which would expand that ./* glob at each iteration. watch foo bar is in effect the same as watch -x sh -c 'foo bar'. When using watch -x, you can specify which shell you want and for instance pick a more powerful one like zsh that can do recursive globbing and restrict to regular files:
watch -x zsh -c 'awk '\''...'\'' ./**/*(.)'
Without gawk, you could still do something like:
watch '
  for file in ./*; do
    [ -s "$file" ] || continue
    printf "%s: " "$file"
    head -n 17 < "$file" | tail -n 1
  done'
Giving an output like:
./file1: line17
./short-file2: line 5 is the last
But that would be a lot less efficient as it implies running several commands per file.
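The gawk part can be tried on its own, without watch (requires GNU awk for ENDFILE and nextfile):

```shell
seq 20 | sed 's/^/line/' > /tmp/long.txt    # 20 lines: line1..line20
printf 'line1\nline2\n'  > /tmp/short.txt   # only 2 lines
gawk 'FNR == 17 {nextfile}
      ENDFILE {if (FNR) printf "%s[%02d] %s\n", FILENAME, FNR, $0}' /tmp/long.txt /tmp/short.txt
```

This prints /tmp/long.txt[17] line17 for the long file (cut at line 17) and /tmp/short.txt[02] line2 for the short one.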
How can I watch the 17th (or last, if less) line in files of a folder?
1,346,971,085,000
I have a directory with ~1M files and need to search for particular patterns. I know how to do it for all the files: find /path/ -exec grep -H -m 1 'pattern' \{\} \; The full output is not desired (too slow). Several first hits are OK, so I tried to limit number of the lines: find /path/ -exec grep -H -m 1 'pattern' \{\} \; | head -n 5 This results in 5 lines followed by find: `grep' terminated by signal 13 and find continues to work. This is well explained here. I tried quit action: find /path/ -exec grep -H -m 1 'pattern' \{\} \; -quit This outputs only the first match. Is it possible to limit find output with specific number of results (like providing an argument to quit similar to head -n)?
Since you're already using GNU extensions (-quit, -H, -m1), you might as well use GNU grep's -r option, together with --line-buffered so it outputs the matches as soon as they are found, so it's more likely to be killed of a SIGPIPE as soon as it writes the 6th line: grep -rHm1 --line-buffered pattern /path | head -n 5 With find, you'd probably need to do something like: find /path -type f -exec sh -c ' grep -Hm1 --line-buffered pattern "$@" [ "$(kill -l "$?")" = PIPE ] && kill -s PIPE "$PPID" ' sh {} + | head -n 5 That is, wrap grep in sh (you still want to run as few grep invocations as possible, hence the {} +), and have sh kill its parent (find) when grep dies of a SIGPIPE. Another approach could be to use xargs as an alternative to -exec {} +. xargs exits straight away when a command it spawns dies of a signal so in: find . -type f -print0 | xargs -r0 grep -Hm1 --line-buffered pattern | head -n 5 (-r and -0 being GNU extensions). As soon as grep writes to the broken pipe, both grep and xargs will exit and find will exit itself as well the next time it prints something after that. Running find under stdbuf -oL might make it happen sooner. A POSIX version could be: trap - PIPE # restore default SIGPIPE handler in case it was disabled RE=pattern find /path -type f -exec sh -c ' for file do awk '\'' $0 ~ ENVIRON["RE"] { print FILENAME ": " $0 exit }'\'' < "$file" if [ "$(kill -l "$?")" = PIPE ]; then kill -s PIPE "$PPID" exit fi done' sh {} + | head -n 5 Very inefficient as it runs several commands for each file.
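A toy run of the grep -r | head approach (GNU grep options); only one match line survives the head:

```shell
mkdir -p /tmp/srch
printf 'pattern one\nmore\n' > /tmp/srch/f1
printf 'no match here\n'     > /tmp/srch/f2
printf 'pattern two\n'       > /tmp/srch/f3
grep -rHm1 --line-buffered pattern /tmp/srch | head -n 1 | wc -l   # → 1
```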
limit find output AND avoid signal 13
1,346,971,085,000
head -num is the same as head -n num, not head -n -num (where num is any number).

Example:

    $ echo -e 'a\nb\nc\nd' | head -1
    a
    $ echo -e 'a\nb\nc\nd' | head -n 1
    a
    $ echo -e 'a\nb\nc\nd' | head -n -1
    a
    b
    c

This head -1 doesn't seem to be documented anywhere.

    $ head --help
    Usage: head [OPTION]... [FILE]...
    Print the first 10 lines of each FILE to standard output.
    With more than one FILE, precede each with a header giving the file name.

    With no FILE, or when FILE is -, read standard input.

    Mandatory arguments to long options are mandatory for short options too.
      -c, --bytes=[-]NUM       print the first NUM bytes of each file;
                                 with the leading '-', print all but the last
                                 NUM bytes of each file
      -n, --lines=[-]NUM       print the first NUM lines instead of the first 10;
                                 with the leading '-', print all but the last
                                 NUM lines of each file
      -q, --quiet, --silent    never print headers giving file names
      -v, --verbose            always print headers giving file names
      -z, --zero-terminated    line delimiter is NUL, not newline
          --help     display this help and exit
          --version  output version information and exit

    NUM may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000,
    M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y.

    GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
    Full documentation at: <https://www.gnu.org/software/coreutils/head>
    or available locally via: info '(coreutils) head invocation'

The man page for head (GNU coreutils 8.29, December 2017, on Fedora 28) has the same DESCRIPTION and option list as the --help output above, with no mention of a bare -NUM form either.
The info page and the online manual for GNU head contain this part:

    For compatibility head also supports an obsolete option syntax
    -[NUM][bkm][cqv], which is recognized only if it is specified first.

The idea behind head -1 being the same as head -n 1 is that the dash is not a minus sign, but a marker for a command line option. That's the usual custom: things that start with dashes are options controlling how to do the processing; other stuff on the command line are file names or other actual targets to process. In this case, it's not a single-character option, but a shorthand for -n; it's still basically an option, and not a filename. In head +1 or head 1, the +1 or 1 would be taken as file names, however.

A double dash -- or --something also has a distinct meaning: by itself (--) it stops option processing, and when followed by something else, it marks a GNU-style long option. So having head --1 stand for head -n -1 wouldn't match the custom.

If I were to guess, I'd assume the quaint shortcut for -n i exists for positive i but not for negative i because the former case is useful more often and easier to implement. (Besides, the standard head is only defined for a positive number of lines.)
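For example, on a GNU system (the all-but-the-last form, -n -NUM, is a GNU extension):

```shell
seq 5 | head -2      # obsolete syntax, same as -n 2: prints 1 2
seq 5 | head -n 2    # prints 1 2
seq 5 | head -n -2   # all but the last 2 lines: prints 1 2 3
```

There is no obsolete shorthand for the negative case; head --2 is rejected as an unrecognized long option.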
Why isn't "head -1" equivalent with "head -n -1" but instead it's the same as "head -n 1"?
1,346,971,085,000
I've got 'color cat' working nicely, thanks to others (see How can i colorize cat output including unknown filetypes in b&w?). In my .bashrc:

    cdc() {
        for fn in "$@"; do
            source-highlight --out-format=esc -o STDOUT -i $fn 2>/dev/null ||
                /bin/cat $fn
        done
    }
    alias cat='cdc'   # To be next to the cdc definition above.

I'd like to be able to use this technique for other functions like head, tail and less. How could I do that for all four functions? Any way to generalize the answer?

I have an option for gd doing git diff using:

    gd() {
        git diff -r --color=always "$@"
    }
Something like this should do what you want:

    for cmd in cat head tail; do
        cmdLoc=$(type $cmd | awk '{print $3}')
        eval "
        $cmd() {
            for fn in \"\$@\"; do
                source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc -
            done
        }
        "
    done

You can condense it like this:

    for cmd in cat head tail; do
        cmdLoc=$(type $cmd |& awk '{print $3}')
        eval "$cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - ; done }"
    done

Example

With the above in a shell script, called tst_ccmds.bash:

    #!/bin/bash

    for cmd in cat head tail; do
        cmdLoc=$(type $cmd |& awk '{print $3}')
        eval "$cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - ; done }"
    done

    type cat
    type head
    type tail

When I run this, I get the functions set as you'd asked for:

    $ ./tst_ccmds.bash
    cat () {
        for fn in "$@"; do
            source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /bin/cat - ;
        done
    }
    head is a function
    head () {
        for fn in "$@"; do
            source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /usr/bin/head - ;
        done
    }
    tail is a function
    tail () {
        for fn in "$@"; do
            source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /usr/bin/tail - ;
        done
    }

In action

When I use these functions in my shell (source ./tst_ccmds.bash) they work as expected on cat, head, tail, and plain text (screenshots of the colorized output in the original post).

What's the trick?

The biggest trick, and I would call it more of a hack, is the use of a dash (-) as an argument to cat, head, and tail through a pipe, which forces them to output the content that came from source-highlight through STDIN of the pipe. This bit:

    ...STDOUT -i "$fn" | /usr/bin/head - ....

The other trick is using the --failsafe option of source-highlight:

    --failsafe
        if no language definition is found for the input, it is simply
        copied to the output

This means that if a language definition is not found, it acts like cat, simply copying its input to the standard output.
Note about aliases

This function will fail if any of head, tail or cat are aliases, because the result of the type call will not point to the executable. If you need to use this technique with an alias (for example, if you want to use less, which requires the -R flag to colorize), you will have to delete the alias and add the aliased command separately:

    less() {
        for fn in "$@"; do
            source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" | /usr/bin/less -R ||
                /usr/bin/less -R "$fn"
        done
    }
How can I colorize head, tail and less, same as I've done with cat?
1,346,971,085,000
Say I only need the first 5 lines of an output for logging purposes. I also need to know if and when the log has been truncated. I am trying to use head to do the job; the seq command below outputs 20 lines that get truncated by head, and I echo a truncation notice:

    > seq -f 'log line %.0f' 20 | head -n 5 && echo '...Output truncated. Only showing first 5 lines...'
    log line 1
    log line 2
    log line 3
    log line 4
    log line 5
    ...Output truncated. Only showing first 5 lines...

But if the seq command outputs fewer than 5 lines, the same construction gives a wrong "truncated" status:

    > seq -f 'log line %.0f' 3 | head -n 5 && echo '...Output truncated. Only showing first 5 lines...'
    log line 1
    log line 2
    log line 3
    ...Output truncated. Only showing first 5 lines...

Is there a way for the head command (or another tool) to tell me if it truncated anything, so that I only display the "...truncated..." message when needed?
A note of warning: when you do

    cmd | head

and the output is truncated, that could cause cmd to be killed by a SIGPIPE if it writes more lines after head has exited. If that's not what you want — if you want cmd to keep running afterwards, even if its output is discarded — you need to read but discard the remaining lines instead of exiting after 10 lines have been output (for instance, with sed '1,10!d' or awk 'NR<=10' instead of head).

So, for the two different approaches:

Output truncated, cmd may be killed:

    cmd | awk 'NR>5 {print "TRUNCATED"; exit}; {print}'
    cmd | sed '6{s/.*/TRUNCATED/;q;}'

Note that the mawk implementation of awk accumulates a buffer-full of input before starting to process it, so cmd may not be killed until it has written a buffer-full (8KiB on my system AFAICT) of data. That can be worked around by using the -Winteractive option. Some sed implementations also read one line in advance (to be able to know which is the last line when using the $ address), so with those, cmd may only be killed after it has output its 7th line.

Output truncated, the rest discarded, so cmd is not killed:

    cmd | awk 'NR<=5; NR==6{print "TRUNCATED"}'
    cmd | sed '1,6!d;6s/.*/TRUNCATED/'
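With the question's seq examples, the first awk approach behaves like this:

```shell
# Short input: all 3 lines pass through, no marker is printed
seq -f 'log line %.0f' 3 | awk 'NR>5 {print "TRUNCATED"; exit}; {print}'

# Long input: 5 lines are kept, then the marker, then awk exits
seq -f 'log line %.0f' 20 | awk 'NR>5 {print "TRUNCATED"; exit}; {print}'
```

The marker line only appears when a 6th input line actually existed, which is exactly the condition head cannot report on its own.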
Truncate output after X lines and print message if and only if output was truncated
1,346,971,085,000
If I do a

    svn log | head

then after the tenth line of output I get an error message:

    svn: Write error: Broken pipe

What's going on here? I haven't seen any other command do this when used with head. Is Subversion unfriendly to the Unix filtering paradigm?
When you write to a pipe whose other end has been closed, you normally receive a SIGPIPE signal and die. However, if you choose to ignore that signal, as svn does, then the write instead returns -1 with errno set to EPIPE, whose English translation is "Broken pipe". And svn chooses to display that error message when it fails to write something to its standard output.

head terminates after it has written 10 lines from its input and, as a result, closes the pipe. svn won't be able to write any more to that pipe. Most applications then die silently, as that is the default behaviour when they're not ignoring SIGPIPE. svn, for some reason (maybe because it needs to do extra things before dying), chooses to ignore the SIGPIPE and to determine that it can't write any more to the pipe by checking the error status of the write.

You get the same error with:

    bash -c 'trap "" PIPE; while echo foo; do :; done' | head

See:

    strace -e write seq 10000 | head

(on Linux) to see what the default behaviour is when you're not ignoring SIGPIPE.
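You can also watch the default behaviour from the shell. In bash, PIPESTATUS records each pipeline member's exit status, and a process killed by signal N is reported as 128+N, so SIGPIPE (signal 13) shows up as 141 (a sketch; Linux and bash assumed):

```shell
# seq keeps writing after head has read its one line and exited, so seq
# is killed by SIGPIPE once the pipe buffer fills up.
seq 100000 | head -n 1 > /dev/null
echo "${PIPESTATUS[@]}"    # typically "141 0": 128+13 for seq, 0 for head
```

svn, by ignoring SIGPIPE, trades that silent 141 death for an explicit error message.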
Why does Subversion give a broken pipe error when piped into head?
1,346,971,085,000
I'm helping the netadmin here with a perl regex to automate operating on some snapshots from our SAN, and our script does stuff like this:

    varinit1=$(iscsiadm -m session | grep rbmsdata1 | head -n1 | perl -pe 's/^tcp: \[\d*\] \d*\.\d*\.\d*\.\d*:\d*,\d* (iqn\..*\..*\..*:.*-.*-.*-.*-(.*-.*-\d{4}-\d{2}-\d{2}-\d{2}:\d{2}:\d{2}\.\d*\.\d*))$/$1/')
    varsnap1=$(iscsiadm -m session | grep rbmsdata1 | head -n1 | perl -pe 's/^tcp: \[\d*\] \d*\.\d*\.\d*\.\d*:\d*,\d* (iqn\..*\..*\..*:.*-.*-.*-.*-(.*-.*-\d{4}-\d{2}-\d{2}-\d{2}:\d{2}:\d{2}\.\d*\.\d*))$/$2/')

There are two pieces in the signature of the snapshot, one nested in the other, and we're using the capturing groups to capture the name and a piece of the name for different subsequent commands which need to be performed. I know it's running the same command over and over, and the regex can be cleaned up later, but basically they are using perl to output one parenthesized group or the other.

    tcp: [32] 40.40.40.101:3260,1 iqn.2001-05.com.equallogic:4-52aed6-91c5ffa78-2f0d8ae18504fee1-r12prd-rbmsdata1-2012-06-29-16:07:40.108.1
    tcp: [33] 40.40.40.101:3260,1 iqn.2001-05.com.equallogic:4-52aed6-91c5ffa78-2f0d8ae18504fee1-r12prd-rbmsdata1-2012-06-29-16:07:40.108.1

Out of that result of iscsiadm and grep, we want to capture this:

    iqn.2001-05.com.equallogic:4-52aed6-91c5ffa78-2f0d8ae18504fee1-r12prd-rbmsdata1-2012-06-29-16:07:40.108.1

and

    r12prd-rbmsdata1-2012-06-29-16:07:40.108.1

The problem we're having is that sometimes the piping to head to get the first line fails with:

    head: cannot open '–n1' for reading: No such file or directory

Of course, this appears to indicate that the stdin to head is empty, so it's looking for a file name. But there's no reason for it to ever be empty. If we do things like this:

    varinit1=$(iscsiadm -m session | grep rbmsdata1 | head -n1)
    varsnap1=$(iscsiadm -m session | grep rbmsdata1 | head -n1)

the second one will fail and the second variable will be empty. Yet if we reverse them, then varsnap1 will fail:

    varsnap1=$(iscsiadm -m session | grep rbmsdata1 | head -n1)
    varinit1=$(iscsiadm -m session | grep rbmsdata1 | head -n1)

It's very peculiar and we can't figure out what's going on. The iscsiadm command returns the same thing each run when we run it from the command line, and after grepping. Is there something messing up the piping?

head version 5.97 on Red Hat Enterprise Linux
Your question apparently contains an error (a long UTF-8 dash instead of the normal hyphen):

    $ head –n1
    head: cannot open ‘–n1’ for reading: No such file or directory
    $ head -n1
    # ctrl-d
    $

I will suppose that was just a browser thing, since only one occurrence was like that; note that head waits for input when it needs it anyway, rather than complaining about a missing file.

Try replacing head -n1 with one of these:

    sed -n 1p
    awk 'NR==1 {print}'   # yay, no potential dash problems

Ok, there are plenty more ways of doing it, but you could also skip that pipe element entirely and just tell grep to return only the first match, by adding the -m 1 parameter. Or eliminate two elements and tell perl to operate on the first matching line only.
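For instance, grep -m 1 stops after the first matching line, which makes the head stage (and its dash problem) go away. A sketch with stand-in input lines, since I don't have your iscsiadm output:

```shell
# Two matching lines on stdin; -m 1 makes grep stop after the first match
printf '%s\n' 'tcp: [32] first rbmsdata1 session' 'tcp: [33] second rbmsdata1 session' |
    grep -m 1 rbmsdata1
# prints only: tcp: [32] first rbmsdata1 session
```

In your pipeline that would be `iscsiadm -m session | grep -m 1 rbmsdata1 | perl -pe ...`, with no head involved.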
Peculiar piping grep/head behavior
1,346,971,085,000
I'm using the Ubuntu terminal. I need to move a specific line in a file (the 11th) to the first position, and then transfer the final result into a new file. The original file contains hundreds of lines. So far I tried to use the sed command tool, but I'm not achieving what I want. This is what I got so far:

    [mission 09] $ sed -n -e '1p' -e '11p' bonjour > bonjour2

But it only puts the first and 11th lines into the new file. I want the new file to have the line moved to the wanted position, followed by the rest of the original lines.

Input:

    English: Hello
    Turkish: Marhaba
    Italian: Ciao
    German: Hallo
    Spanish: Hola
    Latin: Salve
    Greek: chai-ray
    Welsh: Helo
    Finnish: Hei
    Breton: Demat
    French: Bonjour

Desired output:

    French: Bonjour
    English: Hello
    Turkish: Marhaba
    Italian: Ciao
    German: Hallo
    Spanish: Hola
    Latin: Salve
    Greek: chai-ray
    Welsh: Helo
    Finnish: Hei
    Breton: Demat

Any suggestion?
    sed -n '1h;2,10H;11G;11,$p'

First line: copy to the hold space with h (not H, to avoid a leading newline). Then append with H until line 10. At line 11, G gets the hold space back. From 11 to the end, print:

    ]# sed -n '1h;2,10H;11G;11,$p' bonj
    French: Bonjour
    English: Hello
    Turkish: Marhaba
    Italian: Ciao
    German: Hallo
    Spanish: Hola
    Latin: Salve
    Greek: chai-ray
    Welsh: Helo
    Finnish: Hei
    Breton: Demat

This is nicer:

    ]# seq 20 | sed -n '1h;2,10H;11G;11,$p'
    11
    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    12
    13
    14
    15
    16
    17
    18
    19
    20

I take your example:

    ]# sed -e 1p -e 11p -n bonj
    English: Hello
    French: Bonjour

...with the -n switch at the end just to show it counts for both expressions. I also have -n, and then 1h;2,10H, which is almost just 1,10H — a range of line numbers and a "hold" (store) command — except that the first line uses h so the hold space doesn't start with a blank line. Nothing gets printed yet. 11,$p is another range: on line 11 it prints what 11G just got back from the hold space (i.e. lines 1-10), appended to line 11. Lines 12 until $ just print themselves, despite -n, because of that 11,$p.

I could make two -e expressions like you did:

    sed -n -e '1h;2,10H' -e '11G;11,$p'

From 1 to 10 it holds; from 11 to $ it prints. Line 11 has the G first, then the p. The order matters, because:

    ]# seq 20 | sed -n -e '1h;2,10H' -e '11,$p;11G'
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20

Here line 12 wipes out what line 11 has gotten appended after printing.

As a function with params

Always line eleven is boring; here is a function "putfirst" with a parameter:

    ]# declare -f putfirst
    putfirst ()
    {
        e="1h;2,$(($1-1))H;${1}G;${1},\$p";
        sed -ne "$e" $2
    }

Two steps: string generation, then the sed call. Note the "$" has two meanings here: \$p is escaped so the shell does not expand a variable $p — for sed, that $ means the last line.

This is the lowest number that works:

    ]# seq 7 | putfirst 3
    3
    1
    2
    4
    5
    6
    7

Or with the original "bonj" file:

    ]# putfirst 4 bonj | putfirst 6 | head -4
    Latin: Salve
    German: Hallo
    English: Hello
    Turkish: Marhaba

This is two seds in a row, but now doing two operations.

Perl

    perl -ne '$n++; $h.=$_ if $n<11; print $_.$h if $n==11; print if $n>11' <(seq 20)

And as a script, which takes a file name and needs no option:

    $want = 11 ;

    while (<>)
    {
        $n++ ;
        if ($n < $want)    # before $want: store line
          { $lowlines .= $_ ;
            next }         # next line (avoids 'else')
        if ($n == $want)   # at line $want: append stored lines to $_
          { $_ .= $lowlines }

        print ;            # print $_ for $n not less than $want
    }

AWK (stolen from Ed (not the editor!))

    NR <  11 { buf[NR] = $0; next }
    NR >= 11 {
        print
        if (NR == 11) {
            for (i=1; i<11; i++) {
                print buf[i]
            }
        }
    }

I used NR instead of incrementing n, and made the flow more explicit. Same "trick": the next simplifies everything downstream.

With perl -n:

    $n++ ;

    $tmp = $tmp . $_   if $n  < 11 ;
    print $_ . $tmp    if $n == 11 ;
    print $_           if $n  > 11 ;

This is the best format. Symmetrical.
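If the hold-space juggling feels opaque, a two-pass sketch gives the same result: print line 11 on its own, then print everything except line 11. It reads the file twice, so it needs a regular file rather than a pipe (seq stands in for the question's "bonjour" file here):

```shell
seq 20 > bonjour                                      # stand-in input file
{ sed -n '11p' bonjour; sed '11d' bonjour; } > bonjour2

head -n 3 bonjour2    # first three lines are now: 11, 1, 2
```

Less clever than the one-pass versions above, but each sed expression is trivially readable.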
Move specific line from a position to another
1,346,971,085,000
For example, we have N files (file1, file2, file3, ...). We need the first 20% of each of them; the result directory should be like (file1_20, file2_20, file3_20, ...).

I was thinking of using wc to get the number of lines of each file and multiplying by 0.2, then using head to get that first 20% and redirecting it to a new file, but I don't know how to automate it.
So creating a single example to work from:

    root@crunchbang-ibm3:~# echo {0..100} > file1
    root@crunchbang-ibm3:~# cat file1
    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100

We can grab the size of the file in bytes with stat:

    root@crunchbang-ibm3:~# stat --printf %s "file1"
    294

And then using bc we can multiply the size by .2:

    root@crunchbang-ibm3:~# echo "294*.2" | bc
    58.8

However, we get a float, so let's convert it to an integer for head (dd might work here too):

    root@crunchbang-ibm3:~# printf %.0f "58.8"
    59

And finally the first twenty percent (give or take a byte) of file1:

    root@crunchbang-ibm3:~# head -c "59" "file1"
    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22

Putting it together, we could then do something like this:

    mkdir -p a_new_directory

    for f in file*; do
        file_size=$(stat --printf %s "$f")
        percent_size_as_float=$(echo "$file_size*.2" | bc)
        float_to_int=$(printf %.0f "$percent_size_as_float")
        new_fn=$(printf "%s_20" "$f")             # new name: file1_20
        head -c "$float_to_int" "$f" > a_new_directory/"$new_fn"
    done

(head writes straight into the new file here; capturing its output in a variable and printf-ing it back, as in my first draft, would strip trailing newlines and mangle any % characters.) Here, f is a placeholder for each item in the directory that matches file*. When done:

    root@crunchbang-ibm3:~# cat a_new_directory/file1_20
    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22

Update (to grab 20% of lines):

To grab the first (approximately) 20% of lines, we could replace stat --printf %s "$f" with:

    wc -l < "$f"

Since we are using printf and bc, we effectively round up from .5; however, if a file is only 1 or 2 lines long, it will miss them. So we would want to not only round up, but default to grabbing at least 1 line.
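A line-based sketch of the same loop, using integer arithmetic that rounds up, so even 1- and 2-line files keep their first line (the sample files and names here are made up for the demonstration):

```shell
cd "$(mktemp -d)"          # demo directory with hypothetical data
seq 10 > file1
seq 5  > file2

mkdir -p a_new_directory
for f in file*; do
    lines=$(wc -l < "$f")
    keep=$(( (lines * 20 + 99) / 100 ))   # ceiling of lines * 0.2
    head -n "$keep" "$f" > "a_new_directory/${f}_20"
done

cat a_new_directory/file1_20   # first 2 of 10 lines
cat a_new_directory/file2_20   # first 1 of 5 lines
```

The `(lines * 20 + 99) / 100` idiom computes ceil(lines/5) without bc, so no float-to-int conversion step is needed.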
Copy a specific percentage of each file in a directory to a new file
1,346,971,085,000
There are lots of txt files in a directory. If I do

    time wc -l *.txt | head

it takes:

    real    0m0.032s
    user    0m0.020s
    sys     0m0.008s

If I do

    time wc -l *.txt | tail

it takes:

    real    0m0.156s
    user    0m0.076s
    sys     0m0.088s

Does this mean that wc knows beforehand that it is piping to head and only counts the first 10 files to save time? In other words, is it aware of the pipe? And is this something special about wc, or does it apply to all standard/built-in commands?
I ran strace on both commands. The interesting thing is that when you pipe the output to head there are only 123 system calls; when piping to tail there are 245 system calls (or more, when there are more *.txt files).

Case: head

Here are the last few lines when piping to head:

    open("file12.txt", O_RDONLY)              = 3
    fadvise64(3, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
    read(3, "", 16384)                        = 0
    write(1, "0 file12.txt\n", 13)            = -1 EPIPE (Broken pipe)
    --- SIGPIPE (Broken pipe) @ 0 (0) ---
    +++ killed by SIGPIPE +++

head exited after printing its 10 lines, closing its end of the pipe. When wc then tries to write the output for the 12th file, the write fails with EPIPE and wc is killed by SIGPIPE — as seen in the strace output above, it first attempts the write to the pipe (which head no longer reads from) and gets the broken-pipe error.

    The SIGPIPE signal is sent to a process when it attempts to write to a
    pipe without a process connected to the other end. -- from Wikipedia

Case: tail

When piping to tail, there is nothing like the above: wc ends gracefully after writing all of its output to the pipe. tail needs all the lines before it can print the last 10 of them, so it stays connected to the pipe the whole time. When there is no more output to read, tail prints those lines and exits gracefully too.
Time required to do pipe output to head/tail [duplicate]