date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,490,442,686,000 |
I have a tab delimited file with four columns. I would like to grep for the lines that have a specific pattern in column 1, where it says apple M of N.
I only want to extract the lines that have the first number matching the second number, or lines that have the first number one less than the second number.
In the example below, rows 2, 3, and 5 (not counting the header row)
are the ones that fit the pattern.
Col1 col2 col3 col4
apple (XY_012345, apple 6 of 10) 1 12228 12612
apple (XY_678901, apple 5 of 6) 1 12722 13220
apple (XY_234567, apple 2 of 2) 1 18437 24737
apple (XY_890123, apple 8 of 30) 1 24892 29269
apple (XY_456789, apple 12 of 12) 1 35175 35276
|
Similar thing in GNU awk:
$ gawk 'match($0, /([0-9]+) of ([0-9]+)/, a) && (a[2] == a[1] || a[2] == a[1]+1)' file
apple (XY_678901, apple 5 of 6) 1 12722 13220
apple (XY_234567, apple 2 of 2) 1 18437 24737
apple (XY_456789, apple 12 of 12) 1 35175 35276
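The same filter can be sketched in portable awk (without gawk's three-argument match()), using RSTART/RLENGTH and split() instead; the file name sample.txt is a stand-in:

```shell
# Build a small stand-in input file shaped like the question's data
cat > sample.txt <<'EOF'
apple (XY_012345, apple 6 of 10) 1 12228 12612
apple (XY_678901, apple 5 of 6) 1 12722 13220
apple (XY_234567, apple 2 of 2) 1 18437 24737
EOF

# match() without the gawk-only array argument still sets RSTART/RLENGTH
awk 'match($0, /[0-9]+ of [0-9]+/) {
  split(substr($0, RSTART, RLENGTH), a, / of /)
  if (a[2] == a[1] || a[2] == a[1] + 1) print
}' sample.txt
```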
| How to grep a string with any two numbers that match or that have the first number one less than the second number? |
1,490,442,686,000 |
There are two columns of numbers in a file, the first line like this:
0 0.0
I want to add the numbers in column 1 to those in column 2, and I want to keep the results floats, not integers. So the result of the first line should be 0.0, and the other lines' results should have .0 even if the sums are integers.
I tried awk:
awk '{printf "%.1g\n", $0=$1+$2}' > sum.txt
Although I told it to keep one digit after the decimal point by "%.1g\n", it still gives me integer 0.
|
I can't see this mentioned in the manual for GNU awk, but e.g. the man page for the printf() function in glibc says this about %g:
Trailing zeros are removed from the fractional part of the result; a decimal point appears only if it is followed by at least one digit.
Also, there's the # modifier for an "alternate format", which says:
For g and G conversions, trailing zeros are not removed from the result as they would otherwise be.
Also, %g automatically changes to the 1.23e+3 format as necessary, and the precision field (the number after the dot) is "the maximum number of significant digits for g and G conversions" (not digits after the decimal point), so we get:
printf "%#.1g\n", 0      # prints 0.
printf "%#.2g\n", 0      # prints 0.0
printf "%#.2g\n", 12     # prints 12.
printf "%#.2g\n", 1234   # prints 1.2e+03
so while you can force the decimal point to appear, you can't force a trailing zero to appear.
An alternative would be to use %f instead:
printf "%.1f\n", 0       # prints 0.0
printf "%.1f\n", 1234    # prints 1234.0
Or to add the missing zero after the dot as necessary:
a = sprintf("%#.1g", 0); if (substr(a, length(a)) == ".") a = a "0";
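For completeness, a runnable sketch of the %f alternative applied to the original sum task (the sample numbers here are made up):

```shell
# Sum the two columns and always print one digit after the decimal point
printf '0 0.0\n3 4\n' | awk '{ printf "%.1f\n", $1 + $2 }'
```

This prints 0.0 and 7.0, keeping the trailing zero even when the sum is an integer.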
| When I add 0 to 0.0, how do I make sure the result is 0.0 not 0? |
1,490,442,686,000 |
How can I check if a specific string is a floating-point number?
These are possible floating-point numbers:
12.245
+.0009
3.11e33
43.1E11
2e-14
This is what I tried:
grep "^[+\-\.0-9]"
grep "^[+-]*[0-9]"
grep "^[+\-\.0-9]"
And lots of other related things, but none filtered anything at all. Almost every string got through. How would I tackle this problem?
|
grep -xE '[-+]?[0123456789]*\.?[0123456789]+([eE][-+]?[0123456789]+)?'
With -x, we're anchoring the regexp at the start and end of the line, so the lines have to match that pattern as a whole, as opposed to the pattern being found anywhere in the line.
If you wanted to match on all the ones supported by POSIX/C strtod() as recognised by many implementations of the printf utility for instance:
r=[$(locale decimal_point)]
d=[0123456789]
h=[0123456789abcdefABCDEF]
grep -xE "[[:space:]]*[-+]?($d*$r?$d+([eE][-+]?$d+)?|\
0[xX]$h*$r?$h*([pP][-+]?$d+)?|\
[iI][nN][fF]([iI][nN][iI][tT][yY])?|\
[nN][aA][nN]|\
NAN\([abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_]+\))"
So also including things like 0x3f, 0xFP-4, -Infinity, NAN(whatever).
$ printf '%g\n' 0x3f 0xFp-4 -Infinity 'NAN(whatever)'
63
0.9375
-inf
nan
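A quick way to check the first pattern against the question's samples plus a couple of non-floats:

```shell
# Only the lines that are valid floats as a whole should survive
printf '%s\n' 12.245 +.0009 3.11e33 43.1E11 2e-14 hello 1.2.3 |
  grep -xE '[-+]?[0123456789]*\.?[0123456789]+([eE][-+]?[0123456789]+)?'
```

The five floats are printed; hello and 1.2.3 are filtered out.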
| Only allow floating points regex |
1,490,442,686,000 |
If I run the following in bash, i will get correct answer
# if [ 2.0000000000000000000000000001 > 2 ] ; then echo " True "; else echo " False " ; fi
True
#
But if run in python IDLE
>>> if 2.00000000000000001 > 2.0:
    print "true"
else:
    print "false"
false
>>>
Can't Python compare the numbers correctly?
I think I have the answer: Python only prints a limited number of digits of floating-point numbers.
>>> c=2.00000000001232
>>> print c
2.00000000001
>>> d= 2.00000000003234
>>> print d
2.00000000003
>>> e=2.000000000049
>>> print e
2.00000000005
>>>
Thanks guys.
|
You haven't successfully compared the numbers in bash, you've only tested that "bash" (the test command) has successfully tested the length of the string 2.0000000000000000000000000001 and redirected the non-existent output into a file named 2.
You would want the -gt operator, except:
[ 2.0000000000000000000000000001 -gt 2 ] && echo yes
-bash: [: 2.0000000000000000000000000001: integer expression expected
You might think of:
[[ 2.0000000000000000000000000001 > 2 ]] && echo yes
and get:
yes
except you're not comparing integers at that point, you're sorting strings:
[[ 9 > 11 ]] && echo yes
yes
To compare floating-point numbers in Python, see the related questions on Stack Overflow for more details.
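One hedged way to compare floating-point numbers in a shell script is to delegate the comparison to awk (the helper name float_gt is made up here). Note that awk also uses double precision, so the original 2.0000000000000000000000000001 would still compare equal to 2:

```shell
# Exit status 0 if $1 > $2 as floating-point numbers
float_gt() { awk -v a="$1" -v b="$2" 'BEGIN { exit !(a > b) }'; }

if float_gt 9 11; then echo yes; else echo no; fi   # prints no (numeric, unlike [[ 9 > 11 ]])
if float_gt 2.5 2; then echo yes; else echo no; fi  # prints yes
```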
| Python and Bash compare numbers |
1,490,442,686,000 |
Input:
network-snapshot-000000 time 6m 40s fid50k_full 34.9546
network-snapshot-000201 time 6m 52s fid50k_full 30.8073
network-snapshot-000403 time 6m 51s fid50k_full 33.3470
network-snapshot-000604 time 6m 51s fid50k_full 32.7172
network-snapshot-000806 time 6m 49s fid50k_full 30.3764
Output:
network-snapshot-000000 time 6m 40s fid50k_full 34.9546
network-snapshot-000201 time 6m 52s fid50k_full 30.8073*
network-snapshot-000403 time 6m 51s fid50k_full 33.3470
network-snapshot-000604 time 6m 51s fid50k_full 32.7172
network-snapshot-000806 time 6m 49s fid50k_full 30.3764*
|
$ awk 'NR == 1 { min = $NF } ($NF < min) { min = $NF; $0 = $0 "*" }; 1' file
network-snapshot-000000 time 6m 40s fid50k_full 34.9546
network-snapshot-000201 time 6m 52s fid50k_full 30.8073*
network-snapshot-000403 time 6m 51s fid50k_full 33.3470
network-snapshot-000604 time 6m 51s fid50k_full 32.7172
network-snapshot-000806 time 6m 49s fid50k_full 30.3764*
This initializes the smallest found value, min, to the first value in the last column if we're currently reading the first line (NR == 1). Then, for each input line, if the value in the last column is strictly smaller than our min value, the min value is replaced and the current line gets a * appended.
Every line is then printed unconditionally.
| Highlight every number that is smaller than all previous numbers in the last column |
1,490,442,686,000 |
This question is a derivative of the https://askubuntu.com/questions/601149/is-there-a-command-to-round-decimal-numbers-in-txt-files, which was successfully solved by using:
perl -i -pe 's/(\d*\.\d*)/int($1+0.5)/ge' file
The problem is, the header of my CSV file is also modified by the perl's oneliner above, which is inconvenient for me. Is there any way to skip the first line or row of the CSV file in this oneliner?
|
Perl has a special variable $. that keeps track of the current line number. So you can add a simple conditional $. > 1 to the substitution:
perl -i -pe 's/(\d*\.\d*)/int($1+0.5)/ge if $. > 1' file
See PerlVar: Variables related to filehandles
Other tools have explicit header handling ex. with numfmt from GNU Coreutils:
numfmt -d, --header --format='%.0f' --field=- --invalid=ignore <file
(rounding is IEEE from-zero by default).
| Rounding numbers of a CSV file, skipping the header |
1,490,442,686,000 |
I have a file with 4 columns. First two columns are for x and y position (in integers) and third, fourth column for an arbitrary field value.
1 1 0.5 1.2
1 2 1.7 2.3
1 3 2.0 2.2
2 1 1.4 2.5
2 2 1.6 3.0
2 3 2.35 2.9
3 1 2.0 2.9
3 2 0.7 2.5
3 3 0.2 2.1
To this input file, I want to add two columns between second and third columns
For each value of x,y in first two columns a z value from 1 to 3 should be added as a third column. Finally, a fourth column should be added with a value such that
if (z<$3 ) value = 0
if (z>=$3 && z <=$4) value = 1
if (z >$4) value = 2
note that $3 are $4 are the column values in the input file and $4 is always greater than $3.
The output file should look like
1 1 1 1 0.5 1.2
1 1 2 2 0.5 1.2
1 1 3 2 0.5 1.2
1 2 1 0 1.7 2.3
1 2 2 1 1.7 2.3
1 2 3 2 1.7 2.3
1 3 1 0 2.0 2.2
1 3 2 1 2.0 2.2
1 3 3 2 2.0 2.2
2 1 1 2 1.4 2.5
2 1 2 2 1.4 2.5
2 1 3 2 1.4 2.5
2 2 1 0 1.6 3.0
2 2 2 1 1.6 3.0
2 2 3 1 1.6 3.0
2 3 1 0 2.35 2.9
2 3 2 0 2.35 2.9
2 3 3 2 2.35 2.9
3 1 1 0 2.0 2.9
3 1 2 1 2.0 2.9
3 1 3 2 2.0 2.9
3 2 1 1 0.7 2.5
3 2 2 1 0.7 2.5
3 2 3 2 0.7 2.5
3 3 1 1 0.2 2.1
3 3 2 1 0.2 2.1
3 3 3 2 0.2 2.1
How to create such output file with awk?
|
What you appear to be asking for is:
awk '{
y = $2;
for(z=1;z<=3;z++){
value = z < $3 ? 0 : z > $4 ? 2 : 1;
$2 = y OFS z OFS value;
print
}
}' file
However, I can't make it produce the output shown: for example, for the input row 2 1 1.4 2.5, z=1 is less than $3=1.4, so the stated rule gives value 0, yet the expected output shows 2 1 1 2.
| add multiple rows for a range of variable and condtional column to a file |
1,490,442,686,000 |
I want to be able to go through and access each digit individually in a shell script. How would I do this?
|
You can treat numbers as strings, because fundamentally that is what they are*:
$ number=42
$ echo "${number:1:1}"
2
$ echo "${number:0:1}"
4
* Unless you declare the variable as an integer (for example in Bash), in which case it's converted to a decimal number before you can treat it as a string. For example an octal number:
$ declare -i number=042
$ echo "${number:0:1}"
3
$ echo "${number:1:1}"
4
None of this applies to decimal or floating point numbers, which *nix shells do not support directly. For that you'd want to look into bc.
You can also treat a string as an array of characters in many different ways.
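For example, looping over every digit with Bash's substring expansion (the variable name and value are arbitrary):

```shell
#!/bin/bash
number=4207
# ${#number} is the length; ${number:$i:1} is the single character at offset i
for (( i = 0; i < ${#number}; i++ )); do
  printf 'digit %d is %s\n' "$i" "${number:$i:1}"
done
```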
| If I have a number in a shell script, how would i access just one digit of that number? [closed] |
1,490,442,686,000 |
Could someone help me sorting numerically the IP's from a .txt file containing string with an IP associated to it.
Content in txt:
string_A=10.a.y.155
string_B=10.a.y.212
string_C=10.a.y.104
string_D=10.a.y.10
string_E=10.a.y.198
string_U=10.b.x.155
string_V=10.b.x.212
string_X=10.b.x.104
string_Y=10.b.x.10
string_Z=10.b.x.198
The output I want:
10.a.x._ series in sorted way.
string_D=10.a.y.10
string_C=10.a.y.104
string_A=10.a.y.155
string_E=10.a.y.198
string_B=10.a.y.212
I am not sure if I could post the original values due to our companies data policy. So, these dummy values instead.
|
Using -V ("version sort"), implemented by most sort implementations:
$ sort -t '=' -k2 -V file
string_D=10.a.y.10
string_C=10.a.y.104
string_A=10.a.y.155
string_E=10.a.y.198
string_B=10.a.y.212
string_Y=10.b.x.10
string_X=10.b.x.104
string_U=10.b.x.155
string_Z=10.b.x.198
string_V=10.b.x.212
If a=15 and b=140:
$ sort -t '=' -k2 -V file
string_D=10.15.y.10
string_C=10.15.y.104
string_A=10.15.y.155
string_E=10.15.y.198
string_B=10.15.y.212
string_Y=10.140.x.10
string_X=10.140.x.104
string_U=10.140.x.155
string_Z=10.140.x.198
string_V=10.140.x.212
The -k2 with -t '=' makes sort consider the data after the = as the sort key.
| Sort IP's associated to a string in a txt file in linux |
1,490,442,686,000 |
I have a file like this:
0.0451660231
0.0451660231
0.0527343825
0.3933106065
0.3970947862
0.0489502028
0.3592529595
0.3592529595
0.3592529595
0.3630371392
0.3630371392
0.3668213189
0.4008789659
0.1397705227
and I want to divide each line by the maximum value.
I did
cut -f1 -d"," CVBR1_hist | sort -n | tail -1 > maximum
awk -v c=$maximum '{print $1/c}' CVBR1_hist > CVBR1_norm
I have this error:
awk: cmd. line:1: (FILENAME=CVBR1_hist FNR=1) fatal: division by zero attempted
I don't know how to solve it. Can anyone help me?
|
Awk gives you an error because the variable c is set to an empty value: the shell variable $maximum was never assigned (the redirection > maximum creates a file, not a variable).
You could do:
awk -v c="$(cat maximum)" '{print $1/c}' CVBR1_hist > CVBR1_norm
That is where your command failed.
A better way: don't go through a temporary file at all, but store the maximum value in a shell variable, as in Kusalananda's answer.
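Another single-invocation sketch reads the file twice with awk, avoiding both sort and the temporary file (this assumes the values are non-negative; sample_hist is a made-up stand-in for CVBR1_hist):

```shell
# First pass (NR == FNR) finds the maximum; second pass divides by it
printf '2\n8\n4\n' > sample_hist
awk 'NR == FNR { if ($1 > max) max = $1; next } { print $1 / max }' sample_hist sample_hist
```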
| Using awk to divide the number in each line of a file by the maximum value in that file |
1,490,442,686,000 |
Hi, I've tried many solutions to similar questions and none have seemed to work for me. I have a text file where each line has an undefined length of numbers after the string "length_". How can I select all the lines where that number is equal to or greater than 5000? This has been the cleanest-looking code attempt I've tried so far, but it still just produces an empty file (even though file1 definitely contains lines with numbers greater than 5000):
grep --regexp="length_\"[5-9][0-9]\{3,\}\"" file1.txt > file2.txt
example info within input text file:
/file/path/xx00:>TEXT_1_length_81903_cov_10.5145_
/file/path/xx01:>TEXT_2_length_348971_cov_13.6753_
/file/path/xx02:>TEXT_3_length_4989_cov_11.9516_
/file/path/xx03:>TEXT_4_length_29811_cov_13.7948_
/file/path/xx03:>TEXT_5_length_2567_cov_13.7948_
desired example info within output text file:
/file/path/xx00:>TEXT_1_length_81903_cov_10.5145_
/file/path/xx01:>TEXT_2_length_348971_cov_13.6753_
/file/path/xx03:>TEXT_4_length_29811_cov_13.7948_
|
Here's one way, using awk, to print lines from a file that contain a number after the string "length_" that is less than or equal to 5000:
awk '{sub("length_", "", $0); if ($0 <= 5000) { print "length_"$0 } }' input
It simply tells awk to strip off the "length_" string, then compare the remaining part of the line to 5000; if it's less than or equal to 5000, print "length_" and the remainder of the line. Your Q's subject line says (at the time) "greater than 5000", so if that's the actual desire, simply change the comparison in awk:
awk '{sub("length_", "", $0); if ($0 > 5000) { print "length_"$0 } }' input
Given the actual file format, the awk command can be simplified considerably:
awk -F_ '$4 > 5000' input
or
awk -F_ '$4 <= 5000' input
by telling awk to split the fields based on underscores, then comparing the fourth field to 5000. If the comparison is true, then (by default) print.
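Applied to two of the sample lines, only the one whose fourth underscore-separated field exceeds 5000 survives:

```shell
# The length value is the 4th field when splitting on underscores
printf '%s\n' \
  '/file/path/xx02:>TEXT_3_length_4989_cov_11.9516_' \
  '/file/path/xx03:>TEXT_4_length_29811_cov_13.7948_' |
  awk -F_ '$4 > 5000'
```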
| copy every line from a text file that contains a number greater than 5000 |
1,490,442,686,000 |
What I need to do is write a shell program called avgs that would read lines from the file with data, where the title line could be at any line within the data.
I must keep a total and count for each of the last 2 columns and must not include data from the first line in the totals and counts.
This is the file with the data:
92876035 SMITZ S 15 26
95908659 CHIANG R 10 29
SID LNAME I T1/20 T2/30
92735481 BRUCE. R 16 28
93276645 YU C 17 27
91234987 MYRTH R 15 16
The shell program will write to stdout the line: "The averages are 17 and 24"
This is what I tried but it doesn't work
count_ppl=0
total=0
while read ?? ?!
do
total=$((sum+b))
count_ppl=$((count_ppl+1))
done < filename
avg=$(echo "scale=2;$total/$count_ppl" | bc)
echo "The averages are = $avg"
The "??" and "?!" are there beside the "while read" because I do not know what to put there.
I guess this probably computes one average for one column, but how would I get the data from the columns and compute two averages?
(This is bash, btw.)
|
I'm not sure what you mean by "and must not include data from the first line in the totals and counts". Do you mean that the row "92876035 SMITZ S 15 26" must be excluded, or just that the title row "SID LNAME I T1/20 T2/30" must not be summed?
The ?? and ?! need to be replaced by the variable names you need. The last variable name mentioned will keep the remainder of the input. You need the last two columns, so in your case there are 5 columns and the while read statement could be:
while read col1 col2 col3 col4 col5
Next you need to check if the line is the title line. In this case I will test for the word SID in the first column:
if [ "$col1" != 'SID' ]
and from here we can start calculating:
totallines=$((totallines+1))
sumcol4=$((sumcol4+col4))
sumcol5=$((sumcol5+col5))
Finally you can calculate the averages using
avgcol4=$(echo "scale=2; $sumcol4/$totallines"|bc)
avgcol5=$(echo "scale=2; $sumcol5/$totallines"|bc)
to wrap this up you can use the following script:
#!/bin/bash
while read col1 col2 col3 col4 col5
do
if [ "$col1" != 'SID' ]
then
totallines=$((totallines+1))
sumcol4=$((sumcol4+col4))
sumcol5=$((sumcol5+col5))
fi
done < /path/to/inputfile
avgcol4=$(echo "scale=2; $sumcol4/$totallines"|bc)
avgcol5=$(echo "scale=2; $sumcol5/$totallines"|bc)
printf "The averages are %s and %s" $avgcol4 $avgcol5
Another way of doing this is to use awk:
awk '{ if ( $1 != "SID" ) { COL4+=$4; COL5+=$5; } } END { LINES=NR-1; printf "The averages are %.2f and %.2f\n", COL4/LINES, COL5/LINES }' < /path/to/inputfile
The above command filters out the title row, otherwise sums column 4 and column 5, and after processing the input file it sets the LINES variable to the number of records minus 1 (the title row) and prints the output line.
Both the bash and the awk version will output:
The averages are 14.60 and 25.20
| Shell program that outputs the averages |
1,490,442,686,000 |
How to search the XML file using grep or similar for a particular tag but show only the content between opening and closing tags? Here is the exact tag I'd like to locate:
<max-diskusage>1024000000</max-diskusage>
But I would like to get just the 1024000000 part and not the tag.
That is a storage size in bytes; I'd like to convert it to 1 GB IF POSSIBLE, or have any results converted to GB.
|
Using xmlstarlet to parse the XML document for any instance of the max-diskusage tag, extracting its value, and then using GNU numfmt to convert the number of bytes to SI units:
xmlstarlet sel -t -v '//max-diskusage' -nl file | numfmt --to=si
With the short example in the question in file, this returns the string 1.1G. Using --to=iec (to get the traditional power-of-two sizes) in place of --to=si, it returns 977M. Use --to=si --round=down to get 1.0G.
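If xmlstarlet or numfmt are not available, a more fragile sketch with sed and awk works for the simple single-line tag shown, doing the GB conversion by hand (it assumes the whole tag sits on one line, which is not safe for XML in general):

```shell
# Extract the number between the tags, then convert bytes to GB (SI, 1e9)
printf '<max-diskusage>1024000000</max-diskusage>\n' |
  sed -n 's/.*<max-diskusage>\([0-9]*\)<\/max-diskusage>.*/\1/p' |
  awk '{ printf "%.2f GB\n", $1 / 1e9 }'
```

This prints 1.02 GB.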
| How to display only the content between opening and closing tags in XML file? |
1,490,442,686,000 |
I have a representative dataset
35.5259 327
35.526 326
35.526 325
35.5261 324
35.5262 323
35.5263 322
35.5264 321
35.5265 320
35.5266 319
35.5268 318
# Contour 4, label:
35.5269 317
35.527 316
35.5272 315
35.5274 314
35.5276 313
35.5278 312
35.528 311
# Contour 4, label:
35.5282 310
35.5285 309
35.5287 308
35.529 307
35.5293 306
I try to find the two max values within a range in col 2 with:
awk '320>$2,$2>315 && $1>max1{max1=$1;line=$2} 313>$2,$2>307 && $1>max2{max2=$1;line2=$2} END {printf " %s\t %s\t %s\t %s\n",max1,line,max2,line2}' FILENAME
I just get blank output (as I have lots of blank lines in the txt file). How do I ignore those? With $1+0 == $1?
I would like to find the max values in col1 where col2 is between 315 and 320, and between 307 and 313. The output I need:
35.5266 319 35.5278 312
How do I get the desired output ?
Thanks
|
Your code works well if you change , to &&.
But I think you have a logical error, too. Shouldn't $1>max1 rather be $2>line1 (and the same for max2/line2)?
awk '
320>$2 && $2>315 && $2>line1 {max1=$1;line1=$2}
313>$2 && $2>307 && $2>line2 {max2=$1;line2=$2}
END {printf " %s\t %s\t %s\t %s\n",max1,line1,max2,line2}
' file
| Find max values within Column 1 range and print column2 |
1,490,442,686,000 |
I have a file containing the following list of numbers:
0.1131492
0.1231466
0.1327564
0.1017683
5.4356130
0.1360532
5.4258129
0.1433982
0.1124752
.
.
.
I would like to re-write this list of numbers if a line contains a value which is greater than 1.0000 then obtain previous line's number/value, such as:
0.1131492
0.1231466
0.1327564
0.1017683
0.1017683
0.1360532
0.1360532
0.1433982
0.1124752
.
.
.
|
awk '$0>1 { $0=NR==1?0.1:prev }{ prev=$0; print }' file
If the current line's value is greater than 1, it is replaced with 0.1 on the first line, or with the previous value otherwise.
Then the current line is stored in the prev variable and printed.
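A quick check on a made-up three-line sample shows the replacement in action:

```shell
# 3.2 exceeds 1, so it is replaced by the previous line's value 0.5
printf '0.5\n3.2\n0.7\n' | awk '$0>1 { $0=NR==1?0.1:prev }{ prev=$0; print }'
```

This prints 0.5, 0.5, 0.7.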
| re-write column's values to obtain previous line's value if exceed number 1 |
1,490,442,686,000 |
I have a file F.tsv with 13 columns, the last column (the 13th columns) looks like this:
2.1e-06
0.58
10
8.7e-22
0.0014
0.034
9.5
0.67
0.67
0.68
9.2
8.4e-22
9.7
I've tried sort -k 13 F.tsv but it didn't work since this didn't consider the scientific notation (like 2.1e-06).
Is there any way to sort considering the scientific notation like this:
8.4e-22
8.7e-22
1.3e-08
1.3e-08
7e-07
2.1e-06
0.0014
0.034
0.58
0.67
0.67
0.68
9.2
9.5
9.7
10
|
I get the desired result with:
LC_ALL=C sort -g -k 13 F.tsv
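The -g ("general numeric") option is what makes sort understand exponents; a small demonstration on a subset of the values:

```shell
# -g parses each line as a floating-point number, including e-notation
printf '8.4e-22\n10\n0.58\n2.1e-06\n' | LC_ALL=C sort -g
```

The output is 8.4e-22, 2.1e-06, 0.58, 10, whereas plain lexicographic sort would put 10 before 8.4e-22.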
| Sort file by row and with scientific numbers |
1,490,442,686,000 |
The in file looks like this
-17.3644 0.00000000 0.00000000 ....
-17.2703 0.00000000 0.00000000 ....
-17.1761 0.00000000 0.00000000 ....
-16.5173 0.00000000 0.00000000 ....
-16.4232 0.00000000 0.00000000 ....
The desire output should be
-173.644 0.00000000 0.00000000 ....
-172.703 0.00000000 0.00000000 ....
-171.761 0.00000000 0.00000000 ....
-165.173 0.00000000 0.00000000 ....
-164.232 0.00000000 0.00000000 ....
So I want to multiply, let's say, the 1st column by 10 but at the same time also keep the other 1000 columns. With awk '{print $1*10}' infile > outfile you only print the first column; how can I also keep the other columns?
|
Try
awk '{$1=$1*10 ; print }'
This will change the first field and print the whole line.
To keep the formatting to 3 decimal places, use
awk '{$1=sprintf("%.3f",$1*10);print;}'
| multiply specific column in a file which consists of thousands columns |
1,490,442,686,000 |
How can I read, increment and replace a value in a file?
foo="val"
ver="1.2.0001"
...
Now I would like to increment the "0001" to "0002".
|
Assuming that the patch level is always going to be a string of four digits:
$ ver=1.2.0001
$ printf '%s\n' "$ver" | awk -F '.' '{ printf("%s.%s.%04d\n", $1, $2, $3 + 1) }'
1.2.0002
This uses awk and treats the version as three fields delimited by dots. It prints the first two fields as they are, but adds 1 to the third field and formats the result using %04d (a zero-filled, four digit, decimal number).
This would generate 1.2.10000 if $ver was 1.2.9999.
To store the value back into ver, use ver=$( printf ... | awk ... ).
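Putting that together, a round-trip that stores the incremented value back into ver:

```shell
ver="1.2.0001"
# Re-emit the first two dot-separated fields, increment and zero-pad the third
ver=$( printf '%s\n' "$ver" | awk -F '.' '{ printf("%s.%s.%04d\n", $1, $2, $3 + 1) }' )
printf '%s\n' "$ver"   # prints 1.2.0002
```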
| Replace and increment a line in bash |
1,490,442,686,000 |
I'm trying to find the way to add a zero whenever there is a single digit.
750
1.75
750
50
1
32
The output should be like this
750
1.75
750
50
1.0
32
|
Presuming the input file is one number per line, and you want to add .0 to every single-digit integer:
sed 's/^[0-9]$/&.0/' /path/to/inputfile
To replace the contents of the file rather than display the changes:
sed --in-place 's/^[0-9]$/&.0/' /path/to/inputfile
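Run against the sample list from the question:

```shell
# Only the lone single-digit line gets .0 appended
printf '750\n1.75\n750\n50\n1\n32\n' | sed 's/^[0-9]$/&.0/'
```

Only the line containing 1 becomes 1.0; everything else passes through unchanged.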
| Add '.0' to single-digit integers |
1,490,442,686,000 |
I want to halve the value of the numbers inside the field PERCENT="" which are above 10000. The numbers also can't have decimal places. For example PERCENT="50001" would need to be PERCENT="25001" or PERCENT="25000" (used 25001 in my example but doesn't really matter). All the data is also on one line and needs to stay that way. Example below
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="13" ITEM_ID="0" AMOUNT="6500" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="11250" ITEM_ID="31" AMOUNT="1" GROUPING="3" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="7000" ITEM_ID="165" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="47500" ITEM_ID="167" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="10" ITEM_ID="179" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="20" ITEM_ID="179" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="60" ITEM_ID="180" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="12223" ITEM_ID="180" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="50001" ITEM_ID="206" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="2000" ITEM_ID="273" AMOUNT="1" GROUPING="1" DROPSET="0" ZONELEVEL="GSP" VER="200" />
Results should look like
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="13" ITEM_ID="0" AMOUNT="6500" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="5625" ITEM_ID="31" AMOUNT="1" GROUPING="3" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="7000" ITEM_ID="165" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="23750" ITEM_ID="167" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="10" ITEM_ID="179" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="20" ITEM_ID="179" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="60" ITEM_ID="180" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="6112" ITEM_ID="180" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="25001" ITEM_ID="206" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200" /><VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="2000" ITEM_ID="273" AMOUNT="1" GROUPING="1" DROPSET="0" ZONELEVEL="GSP" VER="200" />
|
Assuming that the XML has a proper root tag, using XMLStarlet:
$ xml ed -u '//VALUE/@PERCENT[. > 10000]' -x 'floor(. * .5)' data.xml
The //VALUE bit will select any VALUE node in the XML. The /@PERCENT will select their PERCENT attribute. With [. > 10000] I restrict the selection to those PERCENT attributes whose value exceeds 10000. The edit is to apply the function floor(. * .5) to the selected values (multiply by 0.5 and round downwards) where ., as before, is the placeholder for the value.
This will give you
<?xml version="1.0"?>
<root>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="13" ITEM_ID="0" AMOUNT="6500" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="5625" ITEM_ID="31" AMOUNT="1" GROUPING="3" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="7000" ITEM_ID="165" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="23750" ITEM_ID="167" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="10" ITEM_ID="179" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="20" ITEM_ID="179" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="60" ITEM_ID="180" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="6111" ITEM_ID="180" AMOUNT="1" GROUPING="0" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="25000" ITEM_ID="206" AMOUNT="1" GROUPING="2" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
<VALUE MON_ID="10100" START_LV="1" END_LV="99" PERCENT="2000" ITEM_ID="273" AMOUNT="1" GROUPING="1" DROPSET="0" ZONELEVEL="GSP" VER="200"/>
</root>
| halve number fields above 10000, dropping decimals |
1,490,442,686,000 |
I have a file like this.
input data
4.2394 4.4569
4.2427 4.1011
4.2879 4.1237
4.2106 4.4844
4.2373 4.1071
4.1322 4.0502
4.3103 4.4255
4.4342 4.5262
I need to multiply each element by a constant factor (in this example the factor is 8.06573) to produce an output like this:
output
34.193855762 35.948152037
34.220472671 33.078365303
34.585043667 33.260650801
33.961562738 36.169959612
34.176917729 33.126759683
33.329209506 32.667819646
34.765716019 35.694888115
35.765059966 36.507107126
|
I think this does what you want; it accepts an awk variable named "factor" that can easily be set to whatever you want:
awk -v factor=8.06573 '{printf "%2.9f %2.9f\n", $1 * factor, $2 * factor}'
With the given input, it outputs:
34.193855762 35.948152037
34.220472671 33.078365303
34.585043667 33.260650801
33.961562738 36.169959612
34.176917729 33.126759683
33.329209506 32.667819646
34.765716019 35.694888115
35.765059966 36.507107126
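If instead every column should be scaled (not just the first two), a loop over the fields is a reasonable sketch; note that without an explicit printf format, awk's default OFMT may round the printed values:

```shell
# Multiply every field in every line by the factor (sample data made up)
printf '1 2 3\n4 5 6\n' | awk -v factor=10 '{ for (i = 1; i <= NF; i++) $i *= factor } 1'
```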
| How to multiply two columns in a file by a constant number |
1,490,442,686,000 |
http://0-0.latam.corp.yahoo.com/ 6656
http://0-0.latam.corp.yahoo.com/nonEtAk 6670
http://1.avatar.yahoo.com/ 6644
http://1.avatar.yahoo.com/nonEtAk 6858
Here the first column lists urls and second column lists the response length.
"/nonEtAk" is some non existing path and "/" is the existing one.
By comparing the response from non existing path with the existing one, I want to extract urls which are not giving false positive responses.
So I figure out that this can be done by comparing the response length.
So the data consists of that.
So where the domain is same i want to compare the second column and give the output as the domain.
e.g. in above 0-0.latam.corp.yahoo.com/ is giving 6656 length and
0-0.latam.corp.yahoo.com/nonEtAk is giving 6670 length.
The difference is 14. So it is false positive.
While in the case of 1.avatar.yahoo.com the difference is 200+, so it has something interesting. So if I pass the above data, I want the result to be http://1.avatar.yahoo.com
|
You don't give a specific threshold for acceptance, so why not rank them by the difference:
awk '{
if ($1 ~ /nonEtAk/) {ss=substr($1,1,length($1)-7); rank[ss]+=$2}
else rank[$1]-=$2
} END {
for (key in rank) { print key, "difference is", rank[key] }
}' <(sed -e '/^$/d' file) | sort -r -k4
Output
http://1.avatar.yahoo.com/ difference is 214
http://0-0.latam.corp.yahoo.com/ difference is 14
Walkthrough
Remove all the empty lines and feed it to awk
awk '' <(sed -e '/^$/d' file)
In awk
If the first field contains "nonEtAk" then get the domain name substring (by chopping off the last 7 characters) and add the value from $2 to an associative array (rank) with the domain name as the key
if ($1 ~ /nonEtAk/) {ss=substr($1,1,length($1)-7); rank[ss]+=$2}
...otherwise subtract $2 from the array element with the domain name as the key
else rank[$1]-=$2
...when finished reading the file
} END {
...iterate over the array and print
for (key in rank) { print key, "difference is", rank[key] }
...finally rank them in descending order based upon the difference
| sort -r -k4
| Compare two lines using second columns |
1,283,280,001,000 |
KDE SC 4.5.0 has some problems with some video cards, including mine. Upon release, Arch recommended several workarounds, one of which was:
export "LIBGL_ALWAYS_INDIRECT=1" before starting KDE
I decided that it was the easiest, best method. But I don't know what it does or how it impacts my system. Is it slower than the default? Should I keep an eye on the problem and disable it later once it's fixed?
|
Indirect rendering means that the GLX protocol will be used to transmit OpenGL commands and X.org will do the real drawing.
Direct rendering means that the application can access the hardware directly (via Mesa), without going through X.org first.
Direct rendering is faster, as it does not require a context switch into the X.org process.
Clarification: In both cases the rendering is done by GPU (or technically - may be done by GPU). However in indirect rendering the process looks like:
Program calls a command(s)
Command(s) is/are sent to X.org by GLX protocol
X.org calls hardware (i.e. GPU) to draw
In direct rendering
Program calls a command(s)
Command(s) is/are sent to GPU
Please note that because OpenGL was designed in such a way that it may operate over a network, indirect rendering is faster than a naive implementation of the architecture would be, i.e. it allows sending a bunch of commands in one go. However, there is some overhead in terms of CPU time spent on context switches and protocol handling.
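A side note on usage: the variable does not have to be exported session-wide; prefixing a single command sets it for that process only (sh -c below just stands in for a real GL client such as glxgears):

```shell
# The variable is visible inside the child process...
LIBGL_ALWAYS_INDIRECT=1 sh -c 'echo "child sees: $LIBGL_ALWAYS_INDIRECT"'

# ...but the prefix assignment does not leak into the parent shell afterwards.
echo "parent sees: ${LIBGL_ALWAYS_INDIRECT:-unset}"
```

So you can force indirect rendering only for the applications that need the workaround, and simply stop using the prefix once the driver problem is fixed.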
| What does LIBGL_ALWAYS_INDIRECT=1 actually do? |
1,283,280,001,000 |
I want to create a dummy, virtual output on my Xorg server on current Intel iGPU (on Ubuntu 16.04.2 HWE, with Xorg server version 1.18.4). It is the similiar to Linux Mint 18.2, which one of the xrandr output shows the following:
Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
...
eDP1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
...
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
...
In the Linux Mint 18.2, I can turn off the built-in display (eDP1) and turn on the VIRTUAL1 display with any arbitrary mode supported by the X server, attach x11vnc to my main display and I'll get a GPU accelerated remote desktop.
But in Ubuntu 16.04.2, that's not the case. The VIRTUAL* display doesn't exist at all in xrandr. Also, FYI, xrandr's output names are a little bit different on Ubuntu 16.04.2, where every number is prefixed with a -. E.g. eDP1 in Linux Mint becomes eDP-1 in Ubuntu, HDMI1 becomes HDMI-1, and so on.
So, how to add the virtual output in Xorg/xrandr?
And how come Linux Mint 18.2 and Ubuntu 16.04.2 (which I believe uses the exact same Xorg server, since LM 18.2 is based on Ubuntu, right?) can have a very different xrandr configurations?
Using xserver-xorg-video-dummy is not an option, because the virtual output won't be accelerated by GPU.
|
Create a 20-intel.conf file:
sudo vi /usr/share/X11/xorg.conf.d/20-intel.conf
Add the following configuration information into the file:
Section "Device"
Identifier "intelgpu0"
Driver "intel"
Option "VirtualHeads" "2"
EndSection
This tells the Intel GPU to create 2 virtual displays. You can change the number of VirtualHeads to your needs.
Then logout and login. You should see VIRTUAL1 and VIRTUAL2 when you run xrandr.
Note that if you were using the modesetting driver previously (which is the modern default), switching to the intel driver will cause the names of displays to change from, e.g., HDMI-1 or DP-1 to HDMI1 or DP1.
| Add VIRTUAL output to Xorg |
1,283,280,001,000 |
I'm interested in forwarding an X11 session over SSH, in order to launch a remote process that utilizes OpenGL (specifically, gazebo for anyone familiar.)
The problem that I seem to be running into is that gazebo crashes due to a mismatch in the graphics cards; it can't find "NV-GLX" extensions. The exact error output:
Xlib: extension "NV-GLX" missing on display "localhost:10.0".
Xlib: extension "NV-GLX" missing on display "localhost:10.0".
X Error of failed request: GLXUnsupportedPrivateRequest
Major opcode of failed request: 149 (GLX)
Minor opcode of failed request: 16 (X_GLXVendorPrivate)
Serial number of failed request: 24
Current serial number in output stream: 25
The remote machine is running with an NVIDIA card, and my local machine is using an AMD card.
I've tested X11 forwarding of gazebo between two machines with NVIDIA cards. It works just fine.
As near as I can tell, it seems that one of three things are happening:
I'm doing something wrong,
What I want to do is impossible,
Gazebo doesn't build in an agnostic manner with branching codepaths for different hardware; whatever your system looks like when it builds is what you get.
The remote machine is running Ubuntu and my local machine is a Mac running 10.8.2; I already know that I have x11 forwarding set up properly for normal use as I can get things like xclock to open up in XQuartz just fine. The solution (if it exists) would also preferably work for other OS's including Windows over WinSCP.
|
A few notes from the GLX Wikipedia article:
GLX [is] An extension of the X protocol, which allows the client (the OpenGL application) to send 3D rendering commands to the X server (the software responsible for the display). The client and server software may run on different computers.
and
If client and server are running on the same computer and an accelerated 3D graphics card using a suitable driver is available, the former two components can be bypassed by DRI. In this case, the client application is then allowed to directly access the video hardware through several API layers.
I believe the first point answers your question about whether this is possible or not: it should certainly be possible. The second may provide an explanation for why your client program insists on using features of its local X server (the NV GLX driver) -- perhaps it thinks that localhost:10.0 is the same computer, and so attempted a direct connection.
Things to try:
Instead of gazebo, try glxdemo.
If possible, get the two computers on the same network, and take ssh out of the picture
The big gun: strace your gazebo invocation, and figure out why it's loading nv-glx
Good luck!
| X11 forwarding an OpenGL application from a machine running an NVIDIA card to a machine with an AMD card |
1,283,280,001,000 |
xvfb is supposed to let me run X programs in a headless environment. But when I run xvfb-run glxgears, I get:
libGL error: failed to load driver: swrast
libGL error: Try again with LIBGL_DEBUG=verbose for more details.
Error: couldn't get an RGB, Double-buffered visual
When I run LIBGL_DEBUG=verbose xvfb-run glxgears, I get:
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/tls/swrast_dri.so
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so
libGL error: failed to load driver: swrast
Error: couldn't get an RGB, Double-buffered visual
I'm running stock Lubuntu 13.10 x64 with Intel Ivy Bridge integrated graphics. libgl1-mesa-dri is installed and /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so exists. Running as root doesn't help.
What's going wrong?
|
In case anyone finds this old question: there is a solution mentioned in a bug report linked from another unix.stackexchange question. It was enough to change the default server parameters (-s/--server-args) from -screen 0 640x480x8 to -screen 0 640x480x24, i.e. anything with 24-bit color depth.
| Why does `xvfb-run glxgears` fail with an swrast error? |
1,283,280,001,000 |
I have one weak PC (client) but with acceptable 3D performance, and one strong PC (server) which should be capable of running an application using OpenGL twice, i.e. once locally and once remotely for the client. Currently, I ssh -X into it, but the client's console output states software rendering is used and I only get 3 frames per second (fps). Actually, ssh's encryption is not necessary since this is on a LAN, but it's what I already know for remote applications...
So, how can the client performance be increased? My ideas are
use hardware acceleration, but the server's or the client's one and how?
use something different than ssh
I know, in full resolution and without sophisticated compression a 100 Mbit/s LAN won't make more fps, but it's a windowed application of ca. 800x450, so theoretically up to 12 fps (at 24 bits/pixel) should be possible using uncompressed graphical data. And maybe something better is possible using the client's own GPU or some smart compression.
--
edit Turns out what I want is basically a local version of what e.g. onlive and gaikai offers. Is there something like this for Linux (and possibly free)?
--
edit2 VirtualGL looks like the best solution (though currently not working for me), but I wonder if it is possible to do hardware rendering on the client, too
|
You could check out VirtualGL together with TurboVNC, which should provide you with 20fps @ 1280x1024 over 100 Mbit (see wikipedia).
Do note that it might not work with all applications; it depends on how they use OpenGL.
| How to efficiently use 3D via a remote connection? |
1,283,280,001,000 |
I want to try the most basic OpenGL driver, in order to find out what the problem with OpenGL on my X server is.
I then want X to use software rendering for OpenGL, like Windows does with opengl.dll when no driver is installed.
How can I do that? I didn't find anything when searching for X OpenGL software rendering. I'd be glad for a reference, and for the keywords I should have used to find this out.
I'm using Xorg in RHEL 5.3.
|
Duplicating my answer Force software based opengl rendering - Super User:
sudo apt-get install libgl1-mesa-swx11
will remove the libgl1-mesa-glx hardware-accelerated Mesa libraries and install the software-only renderer.
Alternately, you can set LIBGL_ALWAYS_SOFTWARE=1, which will only affect programs started with that environment variable, not the entire system.
Fedora doesn't package the swrast DRI backend separately from mesa-dri-drivers (and I assume the same is the case in RHEL), so the former isn't an option, but the latter is.
| Using software OpenGL rendering with X |
1,283,280,001,000 |
This question is about executing /usr/bin/Xorg directly on Ubuntu 14.04.
And I know there exists Xdummy, but I couldn't make the dummy driver work properly with the nvidia GPU so it's not an option.
I copied the system-wide xorg.conf and /usr/lib/xorg/modules, and modified them a little bit. (Specified ModulePath in my xorg.conf too)
Running the following command as root works fine:
Xorg -noreset +extension GLX +extension RANDR +extension RENDER -logfile ./16.log -config ./xorg.conf :16
But if I do that as a non-root user (the log file permission is OK), this error occurs:
(EE)
Fatal server error:
(EE) xf86OpenConsole: Cannot open virtual console 9 (Permission denied)
(EE)
(EE)
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
(EE) Please also check the log file at "./16.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.
Could you please help me to run Xorg without sudo??
|
To determine who is allowed to run X, configure it with:
dpkg-reconfigure x11-common
There are three options: root only, console users only, or anybody. The entry is located in /etc/X11/Xwrapper.config.
Since Debian 9 and Ubuntu 16.04 this file does not exist. After installing xserver-xorg-legacy, the file reappears and its content has to be changed from:
allowed_users=console
to:
allowed_users=anybody
needs_root_rights=yes
You also need to specify the virtual terminal to use when starting X, otherwise, errors may occur. For example:
Xorg :8 vt8
| How can I run /usr/bin/Xorg without sudo? |
1,283,280,001,000 |
(Follow-up on How to efficiently use 3D via a remote connection?)
I installed the amd64 package on the server and the i386 one on the client. Following the user's guide I run this on the client:
me@client> /opt/VirtualGL/bin/vglconnect me@server
me@server> /opt/VirtualGL/bin/vglrun glxgears
This causes a segfault, and using vglconnect -s for an ssh tunnel doesn't work either. I also tried the TurboVNC method, where starting vglrun glxgears works, but I'd prefer transmitting only the application window using jpeg compression. Is the problem 32 <-> 64 bit? Or how can I fix things?
|
I don't know how this remote 3D works, but if the client is indeed trying to run the amd64 executable, this is definitely the reason this message appears.
| Segmentation fault when trying to run glxgears via virtualGL |
1,283,280,001,000 |
How can I tell what OpenGL versions are supported on my (Arch Linux) machine?
|
$ sudo pacman -S mesa-demos
$ glxinfo | grep "OpenGL version"
| How can I tell what version of OpenGL my machine supports on Arch Linux? |
1,283,280,001,000 |
How can I turn off Hardware Acceleration in Linux, also known as Direct Rendering. I wish to turn this off, as it messes with some applications like OBS Studio which can't handle capturing of hardware acceleration on other applications since it's enabled for the entire system. Certain apps can turn it on and off, but can't do this for desktop and other apps.
When adding a source to capture from in OBS, it just shows a blank capture image; for example, if I wanted to record my desktop, it'll just show it as a blank capture input. It doesn't work if I want to capture a web browser like Google Chrome, unless it's a single window with no tabs and hardware acceleration is turned off in its settings.
Graphics: Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0
Card-2: NVIDIA GF108M [GeForce GT 630M] bus-ID: 01:00.0
Display Server: X.Org 1.15.1 driver: nvidia Resolution: [email protected]
GLX Renderer: GeForce GT 630M/PCIe/SSE2 GLX Version: 4.5.0 NVIDIA 384.90 Direct Rendering: Yes
|
You can configure Xorg to disable OpenGL / GLX.
For a first try, you can run a second X session: switch to tty2, log in and type:
startx -- :2 vt2 -extension GLX
To permanently disable hardware acceleration, create a file:
/etc/X11/xorg.conf.d/disable-gpu.conf
with the content:
Section "Extensions"
Option "GLX" "Disable"
EndSection
Note that Xwayland in Wayland compositors like Gnome3-Wayland will ignore settings in xorg.conf.d.
| How to disable Hardware Acceleration in Linux? |
1,283,280,001,000 |
I am trying to run an OpenGL 2.1+ application over SSH.
[my computer] --- ssh connection --- [remote machine] (application)
I use X forwarding to run this application and with that in mind I think there are a couple of ways for this application to do 3D graphics:
Using LIBGL_ALWAYS_INDIRECT, the graphics hardware on my computer can be used. According to this post this is generally limited to OpenGL version 1.4.
Using Mesa software rendering on the remote machine. This supports higher versions of OpenGL, but uses the CPU.
However, in my case the remote machine has a decent graphics card. So rather than software rendering, I was wondering if it could do hardware rendering remotely instead.
Also, if there is another way to use my machine's graphics card that would be great too.
|
The choice is not necessarily between indirect rendering and software rendering, but more precisely between direct and indirect rendering. Direct rendering will be done on the X client (the remote machine), then the rendering results will be transferred to the X server for display. Indirect rendering will transmit GL commands to the X server, where those commands will be rendered using the server's hardware. Since you want to use the 3D hardware on the remote machine, you should go with direct rendering (and accept the overhead of transmitting the rendered raster image over the network).
If your application cannot live with OpenGL 1.4, direct rendering is your only option.
| Remote direct rendering for GLX (OpenGL) |
1,283,280,001,000 |
I'm trying to compile a demo project that uses OpenGL.
I'm getting this error message:
But I have everything:
What is happening?
If I have all of the dependencies, why does it not compile?
I use Solus 3.
|
The meaning of -lglut32 (as an example) is: load the library glut32.
The result of the ls you executed showed that you have the header file for glut32.
In order to solve the problem of cannot find -l<library-name>
You need:
To actually have the library in your computer
Help gcc/the linker to find the library by providing the path to the library
You can add -Ldir-name to the gcc command
You can add the library location to the LD_LIBRARY_PATH environment variable
Update the "Dynamic Linker":
sudo ldconfig
man gcc
-llibrary
-l library
Search the library named library when linking.
-Ldir
Add directory dir to the list of directories to be searched for -l.
| gcc /usr/bin/ld: cannot find -lglut32, -lopengl32, -lglu32, -lfreegut, but these are installed |
1,283,280,001,000 |
Today I did an update and GLX stopped working for non-root users:
$ glxinfo
name of display: :0
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 24 (X_GLXCreateNewContext)
Value in failed request: 0x0
Serial number of failed request: 81
Current serial number in output stream: 82
but when i run it as root, all is good:
$ sudo glxinfo
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
GLX_ARB_create_context, GLX_ARB_create_context_profile,
...
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GT 430/PCIe/SSE2
OpenGL core profile version string: 4.2.0 NVIDIA 304.132
OpenGL core profile shading language version string: 4.20 NVIDIA via Cg compiler
...
Ubuntu 14.04, but the same problem was reported for openSUSE here
$ uname -a
Linux xxx 4.4.0-45-generic #66~14.04.1-Ubuntu SMP Wed Oct 19 15:05:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
the only non commented line of /etc/X11/Xwrapper.config is
allowed_users=console
and there is no group or user with that name.
nvidia devices permissions:
$ ls -l /dev/nvid*
crw-rw-rw- 1 root root 195, 0 lis 5 00:24 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 lis 5 00:24 /dev/nvidiactl
|
this one solved the problem for me:
Apparently the only solution at the moment is to downgrade to the
previous driver version (304.131).
You can find the 304.131 drivers for Ubuntu this way:
go to https://launchpad.net/ubuntu/+source/nvidia-graphics-drivers-304/+publishinghistory
look for the version you need, and click on the version number
on the next page, look under "Builds" for your Ubuntu release, then click on the amd64 or i386 link behind the release name
on the next page, look under "Built files" for the .deb file starting with "nvidia-304_304.131". Download that file.
open a terminal and run sudo dpkg -i *path-to-downloaded-.deb-file* to install the downloaded version
later just lock the package version: sudo apt-mark hold nvidia-304. when nvidia fixes its driver, don't forget to unlock the package: sudo apt-mark unhold nvidia-304
| After update GLX works only for root (nvidia) |
1,283,280,001,000 |
I have a simple program written in Go that opens a blank window using GLFW. I'd like to automatically run tests, therefore I have been trying to get GLFW running in docker. So far I've already managed to get xvfb running.
My problem is that I get an error, GLX extension not found, when calling glfwInit. Also, running glxinfo yields the error couldn't find RGB GLX visual or fbconfig. From what I can find online, this is because GLFW cannot find the GPU (because there is none).
Is there still a library I need to install or can I configure something differently (e.g. running GLFW in headless mode) to prevent this error?
Here is a shortened version of my Dockerfile (removed Go specific stuff):
FROM alpine:latest
RUN apk --no-cache add ca-certificates wget
RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
RUN wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
RUN apk add glibc-2.28-r0.apk
RUN apk update
RUN apk add gcc
RUN apk add mesa-dev
RUN apk add libx11-dev
RUN apk add libc-dev
RUN apk add libx11-dev
RUN apk add libxcursor-dev
RUN apk add libxi-dev
RUN apk add libxinerama-dev
RUN apk add libxrandr-dev
RUN apk add xorg-server
RUN apk add xvfb
RUN apk add coreutils
RUN apk add mesa
RUN apk add mesa-gl
RUN apk add mesa-demos
RUN apk add xvfb-run --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/main/ --allow-untrusted
RUN apk add mesa-osmesa
#if you are unfamiliar with docker, this is the command that gets run when starting the container.
ENTRYPOINT xvfb-run -e /dev/stderr --server-args=':99 -screen 0 640x480x8 +extension GLX +render -noreset -ac' glxinfo | cat
Output:
The XKEYBOARD keymap compiler (xkbcomp) reports:
> Warning: Unsupported high keycode 372 for name <I372> ignored
> X11 cannot support keycodes above 255.
> This warning only shows for the first high keycode.
Errors from xkbcomp are not fatal to the X server
name of display: :99
Error: couldn't find RGB GLX visual or fbconfig
I think the XKEYBOARD warning is not important and can probably be ignored.
|
It turns out the mesa-dri-gallium package, which is needed to enable the GLX extension, was missing.
The finished Dockerfile looks like this:
FROM alpine:edge
RUN apk update
# Dependencies for GLFW (not required for this example)
RUN apk add \
build-base \
libx11-dev \
libxcursor-dev \
libxrandr-dev \
libxinerama-dev \
libxi-dev \
mesa-dev
# Required to run xvfb-run
RUN apk add mesa-dri-gallium xvfb-run
# virtualgl includes glxinfo
RUN apk add virtualgl --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
ENTRYPOINT xvfb-run -e /dev/stderr glxinfo | cat
| Running GLFW in Docker |
1,283,280,001,000 |
I have bought new PC with AMD Ryzen 7 3800X and Radeon RX 5700 XT.
Can only install from UEFI USB stick otherwise installation after first dialog (the one where you select Install or Graphical Install) is just broken mess.
When installed, OpenGL uses CPU rendering instead of GPU and no drivers can do anything with it.
I have tried many things, from the xserver amdgpu package to the official amdgpu-pro, but none of them works (amdgpu-pro cannot be installed because it breaks during configuration).
After OS installation I do not get KDE to load even though it is installed. I need at least some firmware to startx, but having a working xserver breaks the TTY (full-black screen, but I can successfully log in and then run startx as if nothing bad was happening).
This could be OK-ish (not good, but something I could hopefully live with), but I cannot get OpenGL to run on the GPU.
Still, glxgears runs at more than 2 thousand FPS, compared to 60 on my old laptop.
And in addition, I never got my 2nd monitor working (main using DisplayPort, secondary using HDMI), even though both of my monitors are connected to the GPU.
What I did not try was getting everything from source and compile it myself (it is 4:30 AM here now and I am tired of spending most of my day on it).
PS: Running Debian Sid with 5.3+ Kernel 19.2+ Mesa, amd64 (+i386 using multiarch).
PPS: Had to use other mirror than deb.debian.org because there was something broken which caused much more problems.
|
Solved by installing firmware-linux-nonfree and then running make install in git.kernel.org/.../linux/kernel/.../linux-firmware.git/ (just download it, extract the .tar.gz and while in the directory type "make install").
This way you should be able to use all the new firmware until it is replaced by newer version from your distro.
| AMD Radeon RX 5700 XT |
1,283,280,001,000 |
I am attempting to figure out if a problem with fonts in Wine is related to my OpenGL driver (r600g). How can I temporarily switch to the llvmpipe software renderer to perform testing, then switch back to r600g? I am using Kubuntu 16.04 with Radeon HD 3200 graphics. Thanks.
|
You can use the LIBGL_ALWAYS_SOFTWARE environment variable to force software rendering on a per-application basis:
LIBGL_ALWAYS_SOFTWARE=1 [application] [arguments ...]
It only works if you are using mesa (which you probably are).
| How to temporarily switch to llvmpipe |
1,283,280,001,000 |
I'm building an appliance/kiosk-type machine which is going to run a single fullscreen Wine application (Synthesia). I'm using Arch Linux running LXDE on an original 7-inch EeePC (well, souped up to 2Gb of RAM, but the CPU is rather slow, something like 633 Mhz).
The game can use either a DirectX or OpenGL renderer and I'm finding it to be quite choppy, especially with the DirectX renderer. However, I remember that the machine was perfectly capable of running Tuxracer and other OpenGL games, and Synthesia definitely should be less demanding to graphics - all it does is drawing a few colored bars.
So, the point is - the display is choppy and the CPU utilization is at 100% when the program runs so I suspect it may be using software rendering.
The video chip is Intel and I have xf86-video-intel installed.
How do I check if the application uses hardware or software rendering? If the software rendering is being used, how do I set it to hardware rendering?
|
Well, since nobody wants to answer :)
This wiki article, while not completely related, provided useful pointers:
You can easily check if you have 3D rendering ...
by installing mesa and running the following command:
glxinfo | grep renderer
If you have no 3D acceleration you'll get some output like this:
[joe@arch64]$ OpenGL renderer string: Software Rasterizer
If 3D acceleration is enabled you'll get a message like this:
[joe@arch64]$ OpenGL renderer string: Mesa DRI R600 (RV730 9490) 20090101 x86/MMX+/3DNow!+/SSE2 TCL DRI2
Also, I had to install xf86-video-intel, libgl, intel-dri, mesa and mesa-demos, and add i915 to the MODULES line in /etc/mkinitcpio.conf as described here.
Everything works perfectly now. Phew...
| How do I tell if a Wine application uses hardware or software rendering? |
1,283,280,001,000 |
I have a rendering software, to be more specific, it's a Unity3D «game» that renders video (saving rendered frames).
Unfortunately Unity3D doesn't support «headless» rendering (it can run in headless mode, but in this case it doesn't render frames), so it needs an X server to create a window.
I have a Debian Bullseye server with ~~Intel GPU (630)~~ NVidia GT1030 with proprietary driver
I don't have any kind of display
I can't plug anything like HDMI fake display device.
It's performance-critical, so it must be rendered fully hardware-accelerated, so solutions like xvfb are not suitable.
And I also want to run it in Docker, and sometimes I need to see what's rendering right now with VNC for debugging purposes.
As I understand it, I need to:
Run an X server on the host machine, creating a virtual display
Share host's X server with a docker container, run my app and a VNC server there
Is it the best way to do that?
I've created a virtual display:
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
EndSection
Section "Files"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 20.0 - 120.0
VertRefresh 30.0 - 120.0
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "NVIDIA GeForce GT 1030"
Option "ConnectedMonitor" "DFP"
Option "CustomEDID" "DFP-0:/etc/X11/EDID.bin"
Option "ConstrainCursor" "off"
BusID "PCI:01:0:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
Option "TwinView" "0"
Option "metamodes" "DFP-0: 1280x1024 +0+0"
SubSection "Display"
Depth 24
EndSubSection
EndSection
And started X:
sudo X :0 -config /etc/X11/xorg.conf
It starts without any errors, but seems hung (doesn't react to Ctrl+C, and the only way to kill it is kill -9 PID).
glxinfo doesn't work:
$ DISPLAY=:0 glxinfo
name of display: :0
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 151 (GLX)
Minor opcode of failed request: 24 (X_GLXCreateNewContext)
Value in failed request: 0x0
Serial number of failed request: 110
Current serial number in output stream: 111
However, if I specify the display, xrandr shows its info:
$ xrandr -d :0
Screen 0: minimum 8 x 8, current 1280 x 1024, maximum 32767 x 32767
DVI-D-0 connected primary 1280x1024+0+0 (normal left inverted right x axis y axis) 510mm x 290mm
1920x1080 60.00 + 59.94 50.00 60.00 50.04
1680x1050 59.95
1440x900 59.89
1280x1024 75.02* 60.02
1280x960 60.00
1280x720 60.00 59.94 50.00
1024x768 75.03 70.07 60.00
800x600 75.00 72.19 60.32 56.25
720x576 50.00
720x480 59.94
640x480 75.00 72.81 59.94 59.93
HDMI-0 disconnected (normal left inverted right x axis y axis)
X server log seems fine:
[ 306.770]
X.Org X Server 1.20.11
X Protocol Version 11, Revision 0
[ 306.770] Build Operating System: linux Debian
[ 306.770] Current Operating System: Linux home-server 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64
[ 306.770] Kernel command line: BOOT_IMAGE=/vmlinuz-5.10.0-9-amd64 root=/dev/mapper/home--server--vg-root ro quiet
[ 306.770] Build Date: 13 April 2021 04:07:31PM
[ 306.770] xorg-server 2:1.20.11-1 (https://www.debian.org/support)
[ 306.770] Current version of pixman: 0.40.0
[ 306.770] Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
[ 306.770] Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 306.770] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Nov 18 21:49:50 2021
[ 306.770] (++) Using config file: "/etc/X11/xorg.conf"
[ 306.770] (==) ServerLayout "Layout0"
[ 306.770] (**) |-->Screen "Screen0" (0)
[ 306.770] (**) | |-->Monitor "Monitor0"
[ 306.770] (**) | |-->Device "Device0"
[ 306.770] (**) |-->Input Device "Keyboard0"
[ 306.770] (**) |-->Input Device "Mouse0"
[ 306.770] (==) Automatically adding devices
[ 306.770] (==) Automatically enabling devices
[ 306.770] (==) Automatically adding GPU devices
[ 306.770] (==) Max clients allowed: 256, resource mask: 0x1fffff
[ 306.770] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[ 306.770] Entry deleted from font path.
[ 306.770] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist.
[ 306.770] Entry deleted from font path.
[ 306.770] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist.
[ 306.770] Entry deleted from font path.
[ 306.770] (WW) The directory "/usr/share/fonts/X11/Type1" does not exist.
[ 306.770] Entry deleted from font path.
[ 306.770] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist.
[ 306.770] Entry deleted from font path.
[ 306.770] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist.
[ 306.770] Entry deleted from font path.
[ 306.770] (==) FontPath set to:
/usr/share/fonts/X11/misc,
built-ins
[ 306.770] (==) ModulePath set to "/usr/lib/xorg/modules"
[ 306.770] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.
[ 306.770] (WW) Disabling Keyboard0
[ 306.770] (WW) Disabling Mouse0
[ 306.770] (II) Loader magic: 0x562334c16e40
[ 306.770] (II) Module ABI versions:
[ 306.770] X.Org ANSI C Emulation: 0.4
[ 306.770] X.Org Video Driver: 24.1
[ 306.770] X.Org XInput driver : 24.1
[ 306.770] X.Org Server Extension : 10.0
[ 306.771] (--) using VT number 3
[ 306.771] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration
[ 306.771] (II) xfree86: Adding drm device (/dev/dri/card0)
[ 306.772] (--) PCI:*(1@0:0:0) 10de:1d01:1043:85f4 rev 161, Mem @ 0xa2000000/16777216, 0x90000000/268435456, 0xa0000000/33554432, I/O @ 0x00003000/128, BIOS @ 0x????????/131072
[ 306.772] (II) LoadModule: "glx"
[ 306.772] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[ 306.772] (II) Module glx: vendor="X.Org Foundation"
[ 306.772] compiled for 1.20.11, module version = 1.0.0
[ 306.772] ABI class: X.Org Server Extension, version 10.0
[ 306.772] (II) LoadModule: "nvidia"
[ 306.772] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[ 306.773] (II) Module nvidia: vendor="NVIDIA Corporation"
[ 306.773] compiled for 1.6.99.901, module version = 1.0.0
[ 306.773] Module class: X.Org Video Driver
[ 306.773] (II) NVIDIA dlloader X Driver 470.86 Tue Oct 26 21:53:29 UTC 2021
[ 306.773] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[ 306.773] (II) Loading sub module "fb"
[ 306.773] (II) LoadModule: "fb"
[ 306.773] (II) Loading /usr/lib/xorg/modules/libfb.so
[ 306.773] (II) Module fb: vendor="X.Org Foundation"
[ 306.773] compiled for 1.20.11, module version = 1.0.0
[ 306.773] ABI class: X.Org ANSI C Emulation, version 0.4
[ 306.773] (II) Loading sub module "wfb"
[ 306.773] (II) LoadModule: "wfb"
[ 306.773] (II) Loading /usr/lib/xorg/modules/libwfb.so
[ 306.773] (II) Module wfb: vendor="X.Org Foundation"
[ 306.773] compiled for 1.20.11, module version = 1.0.0
[ 306.773] ABI class: X.Org ANSI C Emulation, version 0.4
[ 306.773] (II) Loading sub module "ramdac"
[ 306.773] (II) LoadModule: "ramdac"
[ 306.773] (II) Module "ramdac" already built-in
[ 306.773] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
[ 306.773] (==) NVIDIA(0): RGB weight 888
[ 306.773] (==) NVIDIA(0): Default visual is TrueColor
[ 306.773] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
[ 306.773] (**) NVIDIA(0): Option "ConstrainCursor" "off"
[ 306.773] (**) NVIDIA(0): Option "ConnectedMonitor" "DFP"
[ 306.773] (**) NVIDIA(0): Option "CustomEDID" "DFP-0:/etc/X11/EDID.bin"
[ 306.773] (**) NVIDIA(0): Option "MetaModes" "DFP-0: 1280x1024 +0+0"
[ 306.773] (**) NVIDIA(0): Enabling 2D acceleration
[ 306.773] (**) NVIDIA(0): ConnectedMonitor string: "DFP"
[ 306.773] (II) Loading sub module "glxserver_nvidia"
[ 306.773] (II) LoadModule: "glxserver_nvidia"
[ 306.773] (II) Loading /usr/lib/xorg/modules/extensions/libglxserver_nvidia.so
[ 306.777] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation"
[ 306.777] compiled for 1.6.99.901, module version = 1.0.0
[ 306.777] Module class: X.Org Server Extension
[ 306.777] (II) NVIDIA GLX Module 470.86 Tue Oct 26 21:51:04 UTC 2021
[ 306.777] (II) NVIDIA: The X server supports PRIME Render Offload.
[ 306.953] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:1:0:0
[ 306.953] (--) NVIDIA(0): DFP-0
[ 306.953] (--) NVIDIA(0): DFP-1
[ 306.953] (**) NVIDIA(0): Using ConnectedMonitor string "DFP-0".
[ 306.953] (II) NVIDIA(0): NVIDIA GPU NVIDIA GeForce GT 1030 (GP108-A) at PCI:1:0:0
[ 306.953] (II) NVIDIA(0): (GPU-0)
[ 306.953] (--) NVIDIA(0): Memory: 2097152 kBytes
[ 306.953] (--) NVIDIA(0): VideoBIOS: 86.08.0c.00.1a
[ 306.953] (II) NVIDIA(0): Detected PCI Express Link width: 4X
[ 306.954] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): connected
[ 306.954] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): Internal TMDS
[ 306.954] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): 600.0 MHz maximum pixel clock
[ 306.954] (--) NVIDIA(GPU-0):
[ 306.954] (--) NVIDIA(GPU-0): DFP-1: disconnected
[ 306.954] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
[ 306.954] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[ 306.954] (--) NVIDIA(GPU-0):
[ 306.958] (II) NVIDIA(0): Validated MetaModes:
[ 306.958] (II) NVIDIA(0): "DFP-0:1280x1024+0+0"
[ 306.958] (II) NVIDIA(0): Virtual screen size determined to be 1280 x 1024
[ 306.961] (--) NVIDIA(0): DPI set to (63, 89); computed from "UseEdidDpi" X config
[ 306.961] (--) NVIDIA(0): option
[ 306.961] (II) NVIDIA: Reserving 24576.00 MB of virtual memory for indirect memory
[ 306.961] (II) NVIDIA: access.
[ 306.963] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon
[ 306.963] (II) NVIDIA(0): may not be running or the "AcpidSocketPath" X
[ 306.963] (II) NVIDIA(0): configuration option may not be set correctly. When the
[ 306.963] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will
[ 306.963] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For
[ 306.963] (II) NVIDIA(0): details, please see the "ConnectToAcpid" and
[ 306.963] (II) NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X
[ 306.963] (II) NVIDIA(0): Config Options in the README.
[ 306.975] (II) NVIDIA(0): Setting mode "DFP-0:1280x1024+0+0"
[ 306.998] (==) NVIDIA(0): Disabling shared memory pixmaps
[ 306.998] (==) NVIDIA(0): Backing store enabled
[ 306.998] (==) NVIDIA(0): Silken mouse enabled
[ 306.998] (==) NVIDIA(0): DPMS enabled
[ 306.998] (WW) NVIDIA(0): Option "TwinView" is not used
[ 306.998] (II) Loading sub module "dri2"
[ 306.998] (II) LoadModule: "dri2"
[ 306.998] (II) Module "dri2" already built-in
[ 306.998] (II) NVIDIA(0): [DRI2] Setup complete
[ 306.998] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia
[ 306.998] (II) Initializing extension Generic Event Extension
[ 306.998] (II) Initializing extension SHAPE
[ 306.998] (II) Initializing extension MIT-SHM
[ 306.998] (II) Initializing extension XInputExtension
[ 306.999] (II) Initializing extension XTEST
[ 306.999] (II) Initializing extension BIG-REQUESTS
[ 306.999] (II) Initializing extension SYNC
[ 306.999] (II) Initializing extension XKEYBOARD
[ 306.999] (II) Initializing extension XC-MISC
[ 306.999] (II) Initializing extension SECURITY
[ 306.999] (II) Initializing extension XFIXES
[ 306.999] (II) Initializing extension RENDER
[ 306.999] (II) Initializing extension RANDR
[ 306.999] (II) Initializing extension COMPOSITE
[ 306.999] (II) Initializing extension DAMAGE
[ 306.999] (II) Initializing extension MIT-SCREEN-SAVER
[ 306.999] (II) Initializing extension DOUBLE-BUFFER
[ 306.999] (II) Initializing extension RECORD
[ 306.999] (II) Initializing extension DPMS
[ 306.999] (II) Initializing extension Present
[ 307.000] (II) Initializing extension DRI3
[ 307.000] (II) Initializing extension X-Resource
[ 307.000] (II) Initializing extension XVideo
[ 307.000] (II) Initializing extension XVideo-MotionCompensation
[ 307.000] (II) Initializing extension SELinux
[ 307.000] (II) SELinux: Disabled on system
[ 307.000] (II) Initializing extension GLX
[ 307.000] (II) Initializing extension GLX
[ 307.000] (II) Indirect GLX disabled.
[ 307.000] (II) GLX: Another vendor is already registered for screen 0
[ 307.000] (II) Initializing extension XFree86-VidModeExtension
[ 307.000] (II) Initializing extension XFree86-DGA
[ 307.000] (II) Initializing extension XFree86-DRI
[ 307.000] (II) Initializing extension DRI2
[ 307.000] (II) Initializing extension NV-GLX
[ 307.000] (II) Initializing extension NV-CONTROL
[ 307.000] (II) Initializing extension XINERAMA
[ 307.019] (II) config/udev: Adding input device Power Button (/dev/input/event3)
[ 307.019] (II) No input driver specified, ignoring this device.
[ 307.019] (II) This device may have been added with another device file.
[ 307.019] (II) config/udev: Adding input device Power Button (/dev/input/event2)
[ 307.019] (II) No input driver specified, ignoring this device.
[ 307.019] (II) This device may have been added with another device file.
[ 307.019] (II) config/udev: Adding input device Sleep Button (/dev/input/event1)
[ 307.019] (II) No input driver specified, ignoring this device.
[ 307.019] (II) This device may have been added with another device file.
[ 307.019] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event5)
[ 307.019] (II) No input driver specified, ignoring this device.
[ 307.019] (II) This device may have been added with another device file.
[ 307.019] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event6)
[ 307.019] (II) No input driver specified, ignoring this device.
[ 307.019] (II) This device may have been added with another device file.
[ 307.020] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=8 (/dev/input/event7)
[ 307.020] (II) No input driver specified, ignoring this device.
[ 307.020] (II) This device may have been added with another device file.
[ 307.020] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=9 (/dev/input/event8)
[ 307.020] (II) No input driver specified, ignoring this device.
[ 307.020] (II) This device may have been added with another device file.
[ 307.020] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=10 (/dev/input/event9)
[ 307.020] (II) No input driver specified, ignoring this device.
[ 307.020] (II) This device may have been added with another device file.
[ 307.020] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/event0)
[ 307.020] (II) No input driver specified, ignoring this device.
[ 307.020] (II) This device may have been added with another device file.
[ 307.020] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/js0)
[ 307.020] (II) No input driver specified, ignoring this device.
[ 307.020] (II) This device may have been added with another device file.
[ 307.021] (II) config/udev: Adding input device HDA Intel PCH Front Mic (/dev/input/event10)
[ 307.021] (II) No input driver specified, ignoring this device.
[ 307.021] (II) This device may have been added with another device file.
[ 307.021] (II) config/udev: Adding input device HDA Intel PCH Rear Mic (/dev/input/event11)
[ 307.021] (II) No input driver specified, ignoring this device.
[ 307.021] (II) This device may have been added with another device file.
[ 307.021] (II) config/udev: Adding input device HDA Intel PCH Line (/dev/input/event12)
[ 307.021] (II) No input driver specified, ignoring this device.
[ 307.021] (II) This device may have been added with another device file.
[ 307.021] (II) config/udev: Adding input device HDA Intel PCH Line Out (/dev/input/event13)
[ 307.021] (II) No input driver specified, ignoring this device.
[ 307.021] (II) This device may have been added with another device file.
[ 307.021] (II) config/udev: Adding input device HDA Intel PCH Front Headphone (/dev/input/event14)
[ 307.021] (II) No input driver specified, ignoring this device.
[ 307.021] (II) This device may have been added with another device file.
[ 307.021] (II) config/udev: Adding input device PC Speaker (/dev/input/event4)
[ 307.021] (II) No input driver specified, ignoring this device.
[ 307.021] (II) This device may have been added with another device file.
[ 390.739] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon
[ 390.739] (II) NVIDIA(0): may not be running or the "AcpidSocketPath" X
[ 390.739] (II) NVIDIA(0): configuration option may not be set correctly. When the
[ 390.739] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will
[ 390.739] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For
[ 390.739] (II) NVIDIA(0): details, please see the "ConnectToAcpid" and
[ 390.739] (II) NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X
[ 390.739] (II) NVIDIA(0): Config Options in the README.
[ 390.739] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): connected
[ 390.739] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): Internal TMDS
[ 390.739] (--) NVIDIA(GPU-0): AOC 2369M (DFP-0): 600.0 MHz maximum pixel clock
[ 390.739] (--) NVIDIA(GPU-0):
[ 390.739] (--) NVIDIA(GPU-0): DFP-1: disconnected
[ 390.739] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
[ 390.739] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[ 390.739] (--) NVIDIA(GPU-0):
[ 390.760] (II) NVIDIA(0): Setting mode "DFP-0:1280x1024+0+0"
[ 390.781] (==) NVIDIA(0): Disabling shared memory pixmaps
[ 390.781] (==) NVIDIA(0): DPMS enabled
[ 390.781] (II) Loading sub module "dri2"
[ 390.781] (II) LoadModule: "dri2"
[ 390.781] (II) Module "dri2" already built-in
[ 390.781] (II) NVIDIA(0): [DRI2] Setup complete
[ 390.781] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia
[ 390.781] (II) Initializing extension Generic Event Extension
[ 390.781] (II) Initializing extension SHAPE
[ 390.781] (II) Initializing extension MIT-SHM
[ 390.781] (II) Initializing extension XInputExtension
[ 390.781] (II) Initializing extension XTEST
[ 390.781] (II) Initializing extension BIG-REQUESTS
[ 390.782] (II) Initializing extension SYNC
[ 390.782] (II) Initializing extension XKEYBOARD
[ 390.782] (II) Initializing extension XC-MISC
[ 390.782] (II) Initializing extension SECURITY
[ 390.782] (II) Initializing extension XFIXES
[ 390.782] (II) Initializing extension RENDER
[ 390.782] (II) Initializing extension RANDR
[ 390.782] (II) Initializing extension COMPOSITE
[ 390.782] (II) Initializing extension DAMAGE
[ 390.782] (II) Initializing extension MIT-SCREEN-SAVER
[ 390.782] (II) Initializing extension DOUBLE-BUFFER
[ 390.782] (II) Initializing extension RECORD
[ 390.782] (II) Initializing extension DPMS
[ 390.782] (II) Initializing extension Present
[ 390.783] (II) Initializing extension DRI3
[ 390.783] (II) Initializing extension X-Resource
[ 390.783] (II) Initializing extension XVideo
[ 390.783] (II) Initializing extension XVideo-MotionCompensation
[ 390.783] (II) Initializing extension SELinux
[ 390.783] (II) SELinux: Disabled on system
[ 390.783] (II) Initializing extension GLX
[ 390.783] (II) Initializing extension GLX
[ 390.783] (II) Indirect GLX disabled.
[ 390.783] (II) GLX: Another vendor is already registered for screen 0
[ 390.783] (II) Initializing extension XFree86-VidModeExtension
[ 390.783] (II) Initializing extension XFree86-DGA
[ 390.783] (II) Initializing extension XFree86-DRI
[ 390.783] (II) Initializing extension DRI2
[ 390.783] (II) Initializing extension NV-GLX
[ 390.783] (II) Initializing extension NV-CONTROL
[ 390.783] (II) Initializing extension XINERAMA
[ 390.801] (II) config/udev: Adding input device Power Button (/dev/input/event3)
[ 390.801] (II) No input driver specified, ignoring this device.
[ 390.801] (II) This device may have been added with another device file.
[ 390.801] (II) config/udev: Adding input device Power Button (/dev/input/event2)
[ 390.801] (II) No input driver specified, ignoring this device.
[ 390.801] (II) This device may have been added with another device file.
[ 390.802] (II) config/udev: Adding input device Sleep Button (/dev/input/event1)
[ 390.802] (II) No input driver specified, ignoring this device.
[ 390.802] (II) This device may have been added with another device file.
[ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event5)
[ 390.802] (II) No input driver specified, ignoring this device.
[ 390.802] (II) This device may have been added with another device file.
[ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event6)
[ 390.802] (II) No input driver specified, ignoring this device.
[ 390.802] (II) This device may have been added with another device file.
[ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=8 (/dev/input/event7)
[ 390.802] (II) No input driver specified, ignoring this device.
[ 390.802] (II) This device may have been added with another device file.
[ 390.802] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=9 (/dev/input/event8)
[ 390.802] (II) No input driver specified, ignoring this device.
[ 390.802] (II) This device may have been added with another device file.
[ 390.803] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=10 (/dev/input/event9)
[ 390.803] (II) No input driver specified, ignoring this device.
[ 390.803] (II) This device may have been added with another device file.
[ 390.803] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/event0)
[ 390.803] (II) No input driver specified, ignoring this device.
[ 390.803] (II) This device may have been added with another device file.
[ 390.803] (II) config/udev: Adding input device ASRock LED Controller (/dev/input/js0)
[ 390.803] (II) No input driver specified, ignoring this device.
[ 390.803] (II) This device may have been added with another device file.
[ 390.803] (II) config/udev: Adding input device HDA Intel PCH Front Mic (/dev/input/event10)
[ 390.803] (II) No input driver specified, ignoring this device.
[ 390.803] (II) This device may have been added with another device file.
[ 390.803] (II) config/udev: Adding input device HDA Intel PCH Rear Mic (/dev/input/event11)
[ 390.803] (II) No input driver specified, ignoring this device.
[ 390.803] (II) This device may have been added with another device file.
[ 390.804] (II) config/udev: Adding input device HDA Intel PCH Line (/dev/input/event12)
[ 390.804] (II) No input driver specified, ignoring this device.
[ 390.804] (II) This device may have been added with another device file.
[ 390.804] (II) config/udev: Adding input device HDA Intel PCH Line Out (/dev/input/event13)
[ 390.804] (II) No input driver specified, ignoring this device.
[ 390.804] (II) This device may have been added with another device file.
[ 390.804] (II) config/udev: Adding input device HDA Intel PCH Front Headphone (/dev/input/event14)
[ 390.804] (II) No input driver specified, ignoring this device.
[ 390.804] (II) This device may have been added with another device file.
[ 390.804] (II) config/udev: Adding input device PC Speaker (/dev/input/event4)
[ 390.804] (II) No input driver specified, ignoring this device.
[ 390.804] (II) This device may have been added with another device file.
Where is the problem?
|
My xorg.conf is like this
Section "ServerLayout"
    Identifier     "Default Layout"
    Screen      0  "Screen0" 0 0
    InputDevice    "Mouse0" "CorePointer"
    InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "InputDevice"
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    Identifier     "Keyboard0"
    Driver         "kbd"
    Option         "XkbModel" "pc105"
    Option         "XkbLayout" "us"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync      20.0 - 120.0
    VertRefresh    30.0 - 120.0
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro FX 380"
    Option         "ConnectedMonitor" "DFP"
    Option         "UseDisplayDevice" "DFP-0"
    Option         "CustomEDID" "DFP-0:/etc/X11/HPZ24nq.bin"
    BusID          "PCI:21:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth   24
    Option         "TwinView" "0"
    Option         "metamodes" "DFP-0: 1280x1024 +0+0"
    SubSection "Display"
        Depth      24
    EndSubSection
EndSection
The parameters that need to be changed for your computer are at least BusID, DFP, DFP-0, and /etc/X11/HPZ24nq.bin.
I used the EDID file HPZ24nq.bin, which I extracted from a monitor. You will only be able to set resolutions that are supported by the EDID file. You can extract an EDID file from a monitor with read-edid.
You can get the BusID with lspci; I am not sure whether it is strictly required.
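For what it's worth, you can sanity-check the EDID file before pointing Option "CustomEDID" at it; every valid EDID block begins with the fixed 8-byte magic 00 ff ff ff ff ff ff 00, so a quick header check catches truncated or mis-extracted files before X ever sees them. This is only a sketch; the /etc/X11 path is the one from my config above.

```shell
# Sketch: check that a binary blob starts with the EDID header magic
# 00 ff ff ff ff ff ff 00 before handing it to Option "CustomEDID".
edid_magic_ok() {
    # $1: path to a binary EDID file (e.g. /etc/X11/HPZ24nq.bin)
    [ "$(od -An -tx1 -N8 -- "$1" | tr -d ' \n')" = "00ffffffffffff00" ]
}

# Demonstration on a stand-in file (a real one comes from get-edid):
printf '\000\377\377\377\377\377\377\000' > /tmp/fake-edid.bin
edid_magic_ok /tmp/fake-edid.bin && echo "header OK"   # → header OK
```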
| Best way to do rendering on Linux server with GPU but without a display? |
1,283,280,001,000 |
On an older laptop that runs Fedora 18, and has a Intel Corporation 82852/855GM Integrated Graphics Device, I cannot get OpenGL to run on it.
In fact, I have no 3D acceleration whatsoever.
I have the correct drivers installed, but I can't get OpenGL to work.
glxinfo reports:
name of display: :0.0
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 22
Current serial number in output stream: 25
In fact no GLX application will start, all stop with the same error message.
Do I have to enable it in the Kernel or put something into xorg.conf?
As requested, here's my Xorg.0.log and the output of lspci.
|
Actually, the problem sorted itself. There was an update of Mesa a couple of weeks ago, and it now works beautifully.
Just thought I'd share! :3
| How to enable OpenGL rendering on Fedora 18 with an Intel Corporation 82852/855GM? [closed] |
1,283,280,001,000 |
I have an Intel graphics card, of which I can't tell whether or not it's being recognized. When I go through GUI to the System Info section, it states that my graphics card is unknown. Despite this, my lspci displays the graphics card perfectly fine.
Question
Is there any way to get OpenGL working correctly on Ubuntu 11.10, or, if this isn't possible (due to my graphics card), are there any other distros recommended for getting this to work?
lspci
00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
00:1d.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04)
00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
02:00.0 Ethernet controller: Broadcom Corporation NetLink BCM57785 Gigabit Ethernet PCIe (rev 10)
02:00.1 SD Host controller: Broadcom Corporation NetXtreme BCM57765 Memory Card Reader (rev 10)
02:00.2 System peripheral: Broadcom Corporation Device 16be (rev 10)
02:00.3 System peripheral: Broadcom Corporation Device 16bf (rev 10)
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 (rev 34)
|
Just because system settings doesn't correctly report your graphics card, doesn't mean it's operating incorrectly. I'm not sure what you mean by 'Getting OpenGL to work', but I'm guessing you mean Verifying accelerated OpenGL?
The two major components of basic acceleration are Xv and GLX Direct Rendering. Xv provides hardware video overlay, and GLX Direct Rendering provides OpenGL via the graphics hardware.
Use glxinfo to check for GLX Direct Rendering, glxinfo is installed via
sudo apt-get install mesa-utils
Run this command to get your direct rendering status, it will be a yes or no answer.
glxinfo | grep rendering
Use xvinfo to check for video overlay. Make sure the x11-utils package is installed.
sudo apt-get install x11-utils
Run this command to check for Video Overlay, should be a long listing, not an error.
xvinfo
This is just the basics; it doesn't verify the new HD video extensions. It should, however, tell you if acceleration is even working.
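If you want the two greps in one place, here is a small sketch that just parses glxinfo's output on stdin; the "direct rendering: Yes" wording is what mesa-utils' glxinfo prints, so adjust the pattern if your version differs.

```shell
# Sketch: one-line verdict from `glxinfo` output read on stdin,
# so you can run:  glxinfo | accel_summary
accel_summary() {
    if grep -q 'direct rendering: Yes'; then
        echo "GLX direct rendering: enabled"
    else
        echo "GLX direct rendering: DISABLED (likely indirect or software)"
    fi
}
```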
| Getting OpenGL to work under Ubuntu |
1,283,280,001,000 |
For many years I've been in the habit of, when away from home, VNC-ing (over ssh) back into my "home machine" and running Evolution (for email, contacts etc) in a tightvncserver environment there.
This worked fine when the home machine was running Debian/Squeeze, but now that it has been updated to Wheezy, attempting to start Evolution in the VNC server session outputs:
Xlib: extension "GLX" missing on display ":1".
Failed to connected to any renderer:
XServer appears to lack required GLX support
tightvncserver not supporting GLX isn't a surprise, but Evolution moving to a GL backend (through its use of the "clutter" toolkit?) is. (Just to be clear: Evolution works absolutely fine on the desktop; the machine has the nvidia drivers via DKMS.)
What are the prospects for working round this? I'm thinking along the lines of:
Is there a command line option to Evolution which will fix this?
Is there some way of getting GLX support in a VNC server? (Alternatives to tightvncserver which support it?)
Note that I tend to be VNC-ed in over high-latency/low-bandwidth links; I've tried running Evolution remotely over X11 before and it's not a good experience; VNC works much better.
Debian-friendly (apt-get -able) solutions preferred.
|
I see that in Debian9 ("stretch") something called tigervnc-standalone-server has appeared. This appears to include some OpenGL support (I note mesa is a dependency). I had no problems installing it on a fresh Debian9 install and firing it up got me a VNC-able (any VNC client) standalone (not "screen scraped") desktop running Gnome (no messing around with .Xsession files needed) which seems to have no problems running evolution or glxgears. No virtualgl needed or even installed on the machine. Very nice! (Although I strongly suspect this solution is probably using SW rendering, whereas with virtualgl I was using the GPU; for primarily 2D desktop apps on modern CPUs, this is just fine though).
Note that the tigervnc server doesn't accept remote (non-localhost) connections by default (although there is a commandline option to override that); this is supposed to (sensibly) encourage you to use ssh tunnelling!
| How to get Evolution to run in VNC on Debian/Wheezy (or later)? |
1,283,280,001,000 |
How can I configure Qubes OS that a VM supports opengl for my nvidia graphics on my laptop (optimus)?
The laptop has a integrated intel card and a quadro K2000M. At least the quadro 2000 seems to be supported by Xen passthrough: http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters
|
No, Qubes OS does not support OpenGL or any accelerated graphics of any kind in a VM. The document located at https://www.qubes-os.org/doc/user-faq/ states:
Can I run applications, like games, which require 3D support?
Those won’t fly. We do not provide OpenGL virtualization for Qubes. This is mostly a security decision, as implementing such a feature would most likely introduce a great deal of complexity into the GUI virtualization infrastructure. However, Qubes does allow for the use of accelerated graphics (OpenGL) in Dom0’s Window Manager, so all the fancy desktop effects should still work.
| Does Qubes OS support opengl on discrete graphics? |
1,283,280,001,000 |
I need to do headless hardware accelerated server rendering using OpenGL and found out that this is possible with pbuffers and frame buffer objects (FBO). But today these approaches still need a context and can't run without a running X server.
I found a (now deleted, but available via the web archive) presentation from Sun about exactly what I want to do, with the title "The GLP OpenGL Extension, OpenGL Rendering Without A Window System".
What happened to this proposal and are there any alternatives today or any similar developments underway?
|
Recent versions of opensource Linux OpenGL drivers (that is, drivers provided by Mesa [1]) support rendering on headless machines without a window system. The Intel Mesa team (to which I belong) uses this feature to run OpenGL tests on headless machines with no X server.
A coworker and I added the support for headless rendering to Mesa's testsuite, Piglit [2], by using the Waffle [3] framework atop libEGL's GBM backend. (GBM stands for Generic Buffer Manager, and is used to manage GPU buffers without an intermediary display server).
A possible showstopper, if you wish to pursue this approach, is that to my knowledge EGL with GBM is only supported by opensource Linux drivers. If your application must support another Unix or proprietary drivers, this approach won't work.
If you're interested in pursuing EGL/GBM, I can point you to some example code.
(By the way, if you're unfamiliar with EGL, it's a modern replacement for GLX whose API is independent of window system. If you're comfortable with GLX, then you should feel at home with EGL because the two API's are very similar).
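Before trying EGL/GBM, it may be worth checking that a DRM device node is present, since the GBM path opens one under /dev/dri. This is only a rough sketch (the directory argument exists so it can be pointed elsewhere; render nodes like renderD128 only exist on reasonably recent kernels, and the card* nodes also work but usually require video-group membership):

```shell
# Sketch: print the first DRM render node found in a directory, if any.
find_render_node() {
    # $1: device directory to scan (normally /dev/dri)
    for node in "$1"/renderD*; do
        if [ -e "$node" ]; then
            echo "$node"
            return 0
        fi
    done
    return 1
}

find_render_node /dev/dri || echo "no render node; check kernel DRM drivers"
```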
[1] http://mesa3d.org
[2] http://piglit.freedesktop.org
[3] http://people.freedesktop.org/~chadversary/waffle
| What happened to the GLP OpenGL extension? |
1,283,280,001,000 |
Problem
After a recent system update (on Fedora 25) I have some problems with my graphics card (GeForce 1060, using the proprietary driver from RPM Fusion), so I wanted to get diagnostic information using glxinfo.
However, glxinfo can't find libGL:
glxinfo: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
What I've tried
Using DNF, I found out that mesa-libGL contains the missing file:
$ dnf repoquery -l mesa-libGL
/usr/lib/libGL.so.1
/usr/lib/libGL.so.1.2.0
/usr/lib/libGLX_mesa.so.0
/usr/lib/libGLX_mesa.so.0.0.0
/usr/lib64/libGL.so.1
/usr/lib64/libGL.so.1.2.0
/usr/lib64/libGLX_mesa.so.0
/usr/lib64/libGLX_mesa.so.0.0.0
This package was already installed, but no libGL.so.* exists anywhere on the system, and reinstalling the package with dnf reinstall didn't help either (find / -name 'libGL.so.*' doesn't output anything).
Question
Why isn't libGL.so.* installed? Could it have something to do with the Nvidia driver?
|
I found out what the problem was. dnf repoquery -l mesa-libGL outputs the files of all package versions. In this case, libGL.so.1 is only included in mesa-libGL-12.0.3-3.fc25.i686, which is not the version I have installed. Apparently, the package authors changed some dependencies and libGL.so.1 is now part of libglvnd-glx:
$ dnf repoquery -l libglvnd-glx.x86_64
/usr/lib64/libGL.so.1
/usr/lib64/libGL.so.1.0.0
/usr/lib64/libGLX.so.0
/usr/lib64/libGLX.so.0.0.0
After reinstalling the package, glxinfo works again.
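To double-check which libGL a binary actually resolves (and which package owns it), you can combine ldd with rpm -qf. The helper below is just a sketch that pulls the resolved path out of ldd's output:

```shell
# Sketch: extract the resolved path of libGL.so.1 from `ldd` output.
# ldd lines look like:  libGL.so.1 => /usr/lib64/libGL.so.1 (0x00007f...)
resolved_libgl() {
    awk '$1 == "libGL.so.1" && $2 == "=>" { print $3 }'
}

# Intended use (needs glxinfo installed):
#   resolved=$(ldd "$(command -v glxinfo)" | resolved_libgl)
#   rpm -qf "$resolved"    # should now report libglvnd-glx
```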
| Missing libGL on Fedora, cannot install it |
1,485,111,614,000 |
Context
I recently updated my nVidia drivers to 375.26 and recompiled FFmpeg N-83180-gcf3affa and OBS 17.0.2-5-g43e4a2e (sorry if these numbers don't mean anything; I'm not quite sure which version numbers are significant) on my Debian machine. Doing a suspend to RAM causes OBS to stop working, and the only fix is to reboot the machine.
How to reproduce
Run OBS
Output Configuration:
Set output to NVENC H.264 and .mp4
Use CBR
Bitrate = 200K
Kf interval = 0
Low latency, High quality preset, main, auto
2 pass encoding enabled
GPU = 0
B-frames = 0
Start recording and stop to confirm that it works
Go to login actions and click suspend
Turn on and login again
Start recording, OBS fails with this error:
[h264_nvenc @ 0x3fdd1e0] Failed creating CUDA context for NVENC: 0x3e7
[h264_nvenc @ 0x3fdd1e0] No NVENC capable devices found
System info
Drivers/Software versions listed above
GPU: MSI GTX 970
uname -a: Linux version 3.16.0-4-amd64, #1 SMP Debian 3.16.39-1 (2016-12-30)
OS: Debian 8.7 Jessie
I use XFCE 4.10 if that makes a difference to how the action buttons work.
Question
Is there any way to short of rebooting every time to avoid getting this error after waking the computer?
Edit 1
I know for a fact that OBS is the source of this problem.
Test case 1:
Start computer, use ffmpeg's h264_nvenc encoder to output a video file
Suspend to RAM
Login, successfully repeat step 1
Test case 2:
Start computer, use OBS to record a video with h264_nvenc
Quit OBS
Suspend to RAM
Login, successfully repeat step 2
Test case 3:
Start computer, use OBS to record video with h264_nvenc
Suspend to RAM
Login, fails with Cannot init CUDA
My guess is that OBS does not close its streams when a recording is stopped; they are probably persisted for performance (?) reasons until you exit the program. I have no clue how to fix this. Restarting OBS has no effect once the error shows up; you must reboot the system.
It appears that the GPU is completely fine at handling everything else nonetheless, glxinfo, nvidia-smi, nvidia-settings all confirm that the GPU is indeed being utilized to process other tasks. It seems the NVENC is the only thing that has trouble after the suspend to RAM.
Edit 2
Here are the dmesg logs: https://www.diffchecker.com/wto7KPJZ
Tabbed "original" were what changed after doing the suspend, tabbed "changed" were what changed after doing the fix that I suggested.
Full dmesg output: https://0paste.com/10601#hl
|
FFmpeg only locks up on CUDA init if an h264_nvenc stream was started (and stopped, but this is not needed) before putting the system into suspend. If OBS never records anything with the h264_nvenc encoder before suspend, it will work fine when you login again.
If OBS locks up after logging in, it will become usable by:
Exit OBS
Run in terminal:
sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
Open OBS again
???
Profit
If unloading nvidia_uvm doesn't work, the DRM and modeset modules may need to be reloaded as well, although I've never had this problem.
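If you end up scripting the workaround, here is a sketch that only reloads the module when it is actually loaded; the parsing helper reads lsmod output on stdin so the logic can be checked without root. The module name is the one from the fix above.

```shell
# Sketch: reload nvidia_uvm only when it is actually loaded.
uvm_loaded() {
    # reads `lsmod` output on stdin; exits 0 if nvidia_uvm is listed
    awk '$1 == "nvidia_uvm" { found = 1 } END { exit !found }'
}

if lsmod | uvm_loaded; then
    sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
fi
```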
| CUDA Context for NVENC not found after system suspend |
1,485,111,614,000 |
I recently got into OpenGL and I was researching on how opengl can either render directly or indirectly using the X window system.
What I understood was that, in order to render directly it uses something known as the DRI (Direct Rendering Infrastructure). The DRI then makes use of the DRM (Direct Rendering Manager) to access the hardware directly.. (I don't know if I got that right)
Why do we even need OpenGL? Why can't we access the hardware directly using the DRM?
|
What is OpenGL and what is graphics card?
OpenGL is high level 3D graphics library.
Although it is used to talk with "graphics card", or rather a GPU, it abstracts many differences between the many 3D capable cards into much simpler and more coherent OpenGL "concept space". OpenGL does not care whether computer has PCI, is a SoC, how many control registers card has or any other unimportant hardware details.
Modern OpenGL capable GPU, like from nvidia, ati or in a mobile chipset, is not really "a graphics" card even, per se.
In fact, actual "graphics card" of the modern GPU is like 0.01% of it's circuitry these days.
In reality, modern GPU is very complicated "3D graphics engine". So much that 99.99% of the GPU chip is this engine.
We are not talking about low level primitives like pixels either.
For example, almost all past GPUs (circa '00) had so called "2D blitter" engines, actual physical circuitry, for bitmap transfer ie "blitting".
Older consoles like the SNES have similar circuitry (to a blitter) which could transfer, transform, stretch and rotate an image sprite (in a single step) into framebuffer memory. We tend to call that circuitry "a rotozoomer". Because it was expressed in hardware, the SNES could easily do 60fps fluid graphics even while being an 8/16-bit computer (naturally the number of sprites was limited by the design of a given chip).
Modern GPUs don't even have "2D blitters" or "rotozoomers" anymore, as when you draw a 3D quad polygon composed of 2 triangles in a specifically configured orthographic projection, the GPU's 3D engine can essentially work as the blitter/rotozoomer of old, but because it is being processed by actual 3D circuitry, you have an incredible amount of flexibility: fragment shaders, vertex shaders, Z-buffer, stencil buffer, vertex arrays etc. Think of it as a hyper-duper-rotozoomer on steroids.
In fact, most GPUs are so advanced these days that they can draw, texture and shade complex 3D models in an insane number of instances on screen completely autonomously - i.e. modern GPUs draw whole scenes on their own!
They are, in fact, their own computers within computers. Think about it. The GPU actually understands(!) model data (edge lists, polygon/triangle lists, textures, transformation matrices, and these days even tensors), model transformations, perspective projection transformations and so on.
Even more, all of this is "programmable"(!), i.e. you can compose "micro-programs", compile them into card processor(s) instructions and "upload" them (non-permanently) into the card itself.
OpenGL expresses all these operations in a programming language agnostic form. For example it contains fully optimizing shading language compiler not much different from gcc or clang (llvm), but you don't even get to see fragment shader's assembly instructions. There is simply no need.
But at a lower level, a system level, all these model data and shader programs have to get into the card, somehow. You also have to "drive" the card around (ie send commands to it: draw this, draw that, clear screen). Some of these data (like compiled shader programs and model data) come in form of memory blocks/chunks. Other things, like commands, often work through GPU registers, as in CPU.
On PC style computers, with PCI technology, all of this is usually done by mapping the card's register sets and "memory windows" (data transfer holes) from PCI space into processor memory space. But it might be done completely differently in a tablet or a phone (many SoCs don't even have PCI).
Such details are of no interest to you as a graphics programmer.
Traditional "unix" "graphics"
Now, traditionally, "unix" (BSDs, Linux, Solaris etc) did not care a zilch about graphics. The X11 server was the tool designed for that.
In fact in old "unix" systems, there was not even a concept of a display driver: "unix" did not even know, what a graphics card is!
To it and its kernel it was a useless, dead device, draining power, unlike a teletype or a disk drive, about which it knew quite a lot :).
How did X do drawing then, you might ask?
With X, the unix system was actually extended (and this extension was really tiny - unlike DRI on modern systems) in a way that allowed it to map the PCI memory mentioned above into the address space of a normal usermode process: i.e. the X server.
And thus, through this "hole", it was X that actually talked to and "drove" the card directly. Think about it for a second - it's a kind of microkernel/FUSE-like design.
This worked...
... for a time ...
... but it turned out, that this did not work as well as we thought it would.
It was sufficient at the beginning, but as cards and GUIs got more complex, this design did not cope well with load and usage requirements.
The initial design is beautiful:
crashing X won't crash the system.
programs ask X to draw, and X arbitrates the drawing among them and drives the hardware directly.
because communication with X goes over a socket, the drawing works even over the network (so much for memed to death network transparency).
Situation "on the ground" was simple as well:
graphics cards really were just a few components: framebuffer, character generator, display controller and if it was a fancy card, a blitter.
applications issued mostly simple commands to draw simple vectorial 2D graphics.
X is the "knowledge point" of graphics algorithms (today it is a combination of GPU's circuitry and shader code - both live outside of X).
It turned out we need/wanted more...
Ill effects of usermode X
First, a usermode program driving hardware directly introduced latencies and all kinds of weird issues.
Like, for example, sometimes graphics cards generate interrupts - how do you propagate an interrupt from a device the kernel does not care about into the usermode X server process?
Sometimes response to this interrupt has to be instantaneous, but what if Xorg is currently scheduled out and ssh is handling incoming traffic at the moment?
Or another thing: to draw anything, you have to switch from your program's usermode, to kernel mode, to X's usermode and back (latency). In fact, it is even worse, you will see.
Many issues like this started popping up.
Second, X needed its own drivers to talk to these cards: these were not kernel drivers, but FUSE-like X drivers that let usermode X talk to the hardware. Kind of a mess.
Third, slowly but surely, it was discovered that the kernel, if it is to stay relevant, needs to know about graphics (if you want a graphical boot screen, the kernel needs to know about graphics modes and monitors etc - some devices like phones don't even have text-mode graphics). So the kernel started growing its own drivers anyway - thus for a time there was a duplication of drivers for any given card: one for X and one for the kernel.
Fourth, modern "graphics cards" are not graphics cards anymore - these are intelligent engines that know all kinds of interesting objects: fragment shaders, vertex shaders, textures, vertex buffers, display buffers, displays, monitors ... list is insane.
In a modern multitasking unix-like OS there are potentially multiple applications running at the same time, under different users, on different displays, many using such hardware objects in the card concurrently - i.e. you need accounting (allocation/deallocation), access control and so on.
X manages all these objects among applications from user space - but when X crashes, what happens to these objects? They stay allocated on the card, but the system has lost all the accounting information about them. The only way to recover from this is to reset the card hard, so that it forgets everything and starts anew.
Finally think about how X based socket drawing works. Let's say your application wants to draw a 10K vertex 3D model:
application has to prepare memory buffer for the model data to be held
it has to format the model data into proper format
then send it over unix socket (or network, if you are insane) to X
X has to prepare memory buffer to receive the data
kernel shuffles data over
X has to prepare communications with the card
X has to map its data into the card (ie "send" it to the card)
kernel shuffles data over
X has to retrieve result of the operation, pack it and return it over the X socket
kernel shuffles data over
application has to receive result in message
application can look at message received and learns that operation failed
Keep in mind that any of the actions above can be pre-empted by anything else in need of servicing or running (audio player, ssh, disk, etc). Moreover, all of this back and forth probably takes longer than a modern OpenGL game needs to render a frame with a million detailed trees in it (latency).
Modern times: Kernel learns not only to chew bubblegum and walk, but also to paint like Bob Ross too, all at the same time
Thus two new technologies are introduced: KMS and DRI.
KMS - Kernel ModeSetting - it makes the kernel aware of graphics cards and of what they have: framebuffers, display links, monitors/displays and resolutions.
DRI - Direct Rendering Infrastructure - it extends the kernel with DRM (Direct Rendering Manager), which is aware of the fact that some cards are complex 3D graphics engines that can contain objects: buffers/various memory blocks (framebuffers, models, textures, memory holes for "streaming in" data), shaders and other stuff. It juggles access rights to them too.
After all, the kernel can do the best system object accounting (processes, files, memory maps) in the system. DRM extends this to graphics objects. If X (or anything else using it) crashes, the kernel will get the event and will clean all the process-related resources from the card itself. No more object leaks and hard card resets.
All these interfaces are implemented using syscalls, file descriptors and memory maps, instead of sockets (like with X), and go directly into the kernel when needed. So the latency is greatly reduced, in case of in-memory mapped "holes" almost instantaneous.
Because these are syscalls, and other kernel objects, and not buffers sent over sockets, zero copy data transfers at full hardware speeds can be achieved.
Finally you have to understand, that in case of DRI, it is the drawing application itself (NOT X!) that communicates with cards directly.
That's a reason why it's called "direct".
For example, if you are running X and using a DRI application, like something OpenGL based, the content that gets drawn into that window usually goes into the card through an application-private codepath over DRI, not over X. X just pretends the window maps to its position in the X display.
Modern OpenGL is built on top of DRI. However DRI is not about drawing...
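That direct path is easy to observe on a Linux box: the DRM device nodes DRI clients open live under /dev/dri. A small sketch (the helper function and its fallback message are my own framing; /dev/dri is the standard DRM location on Linux with KMS):

```shell
#!/bin/sh
# List the DRM device nodes a direct-rendering client opens.
list_drm_nodes() {
    dir=${1:-/dev/dri}
    if [ -d "$dir" ]; then
        # card0 is the privileged (master/modesetting) node;
        # renderD128 is the unprivileged render node DRI clients typically use
        ls "$dir"
    else
        echo "no DRM devices under $dir"
    fi
}
```

While an OpenGL application is running, something like `lsof -p <pid>` will typically show one of these nodes open in the process itself - X is nowhere in that code path.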
Why not use DRI directly? Because what would be the point?
If you are using OpenGL, you are already using it directly. It's just that in that case it is the OpenGL library loaded into your process address space that drives DRI.
But for graphics application programmer it makes no sense to bother with DRI, it is simply too low level. DRI deals with system things, like access rights, allocation tracing and cleanup, drawing context switching, commands multiplexing, memory mapping and so on.
There is not much "drawing" going on in DRI. What could a graphics programmer gain from messing with this subsytem? Literally nothing.
Also DRI is strongly unix system specific (all major BSDs use Linux's DRI infrastructure), so it's kinda like Direct3D of unix but at a much lower level, so Direct3D comparison is not really fair, here.
For example, another new API that uses DRI directly is Vulkan. Look at Vulkan code to see how much more complex it is compared to the same OpenGL code: it is because it allows you to go to a much lower level than OpenGL ever did. Yet DRI is at an even lower level, and does not deal with "drawing" here either.
Another API that probably uses DRI under the hood, but has nothing to do with 3D graphics and deals with the completely orthogonal world of video decoding, is VDPAU. The internals of VDPAU are completely disconnected from its application users: media players.
DRI is directly tied to the hardware minutiae of specific cards, and a big part of it, the DRM and graphics memory managers, lives in the kernel and not in the application or userland.
It makes no sense for application programmers, even graphics programmers, to reach down to the DRI level directly, unless they want to do specific system programming tasks (like fixing DRI). If they can do that, they can readily work on open source graphics drivers instead. Or on the OpenGL, Vulkan or VDPAU libraries themselves, for that matter.
OpenGL and Vulkan abstract the differences among cards into a "thin", cross-platform layer that is more than expressive enough for any task graphics programmers might want to do.
It is of no interest to a graphics application programmer how texture allocation works in some other application running in parallel to his, or which texture is bound to a specific DRM file descriptor.
All they care about is driving the card the way they want, from their application.
| What is OpenGL's relationship with the DRM |
1,485,111,614,000 |
I have an AMD 5700XT gpu and I am trying to learn SYCL, but I have a lot of doubts about the current state of AMD gpu driver stacks. According to what I read, there are several driver stacks for AMD gpus: mesa, amdgpu and amdgpu-pro. If I understand correctly, mesa has its own opencl implementation and there is another implementation for the amdgpu drivers.
Also, amd has ROCm, which is another OpenCL implementation, HIP, which is something like CUDA, and some tooling, right?
There are at least 2 implementations, ComputeCpp and hipSYCL, which could possibly run SYCL on AMD gpus. Shouldn't the clang implementation also be able to run on AMD gpus, as according to the image it runs with OpenCL and SPIR-V devices?
If I understand correctly, there is also oneAPI, which is an implementation of SYCL (DPC++) with some extensions (SYCL 2020) and some libraries on top of that SYCL implementation (kind of what cuBLAS or cuSPARSE are to CUDA). Should it be possible to run oneAPI libraries on top of another SYCL implementation?
Finally, if I use mesa for graphics (OpenGL and Vulkan), is it possible to run ROCm on top of that? How do ROCm and the mesa OpenCL implementation interact with the mesa graphics drivers?
As you can see I have a big confusion about all the ecosystem. Can someone provide some light on it?
|
My understanding is that there's a single kernel driver, amdgpu. Mesa lives in userland and is an OpenGL implementation. amdgpu-pro provides alternative closed-source userland libraries (OpenGL, OpenCL etc).
clover is an OpenCL implementation on top of Mesa. I'm not sure what state it is in (my impression is that it was not very well maintained for some time and development has stalled), but I doubt it will be able to run SYCL programs.
ROCm is more than an OpenCL implementation. It's AMD's GPGPU platform, providing an AI platform, accelerated libraries, tools, and compilers. It also contains an OpenCL implementation.
HIP is not an OpenCL implementation, it's effectively AMD's implementation of the CUDA programming model.
To my knowledge, unfortunately no recent AMD OpenCL implementation is able to run SYCL programs because AMD neither supports SPIR nor SPIR-V. AMD OpenCL has never supported SPIR-V, so DPC++/clang won't work. ComputeCpp can also work with SPIR, but support for that was removed already some time ago from AMD's OpenCL implementation.
As far as SYCL support for AMD is concerned, hipSYCL usually is the way to go. Unfortunately, AMD does not support your specific GPU on ROCm (on which hipSYCL builds) because they focus ROCm support on chips that are also used in data center cards.
See here for more details:
https://github.com/RadeonOpenCompute/ROCm#supported-gpus
oneAPI is Intel's umbrella term for their compute platform, providing libraries, tools and compilers (similarly to ROCm). DPC++/LLVM SYCL/Intel SYCL is part of oneAPI. All those terms refer to the same thing, namely Intel's implementation of the Khronos SYCL 2020 standard. Pretty much all of Intel's extensions have been merged into the SYCL 2020 specification, so don't think about DPC++ as a separate language.
It is possible to add additional backends to oneAPI libraries, for example Codeplay has done so for NVIDIA. It is also in principle possible to port them to another SYCL implementation. We're working on some groundwork to potentially move in that direction with hipSYCL by improving compatibility between hipSYCL and DPC++:
https://www.urz.uni-heidelberg.de/en/2020-09-29-oneapi-coe-urz
It is not possible to run ROCm on top of mesa. It's a fully independent stack for GPU compute. As far as I know there is no interaction between ROCm and mesa.
| SYCL and OpenCL for AMD gpu |
1,485,111,614,000 |
The problem: I ssh to two remote clusters. By running glxgears, on one cluster I can successfully visualize the rotating gears though with some warning messages (details below), but on the other one it gives Error: couldn't get an RGB, Double-buffered visual and nothing is visualized.
I have very little knowledge about OpenGL or X11 etc, so I'm not sure about
which one my problem is related to, OpenGL (and its driver?), my X11 software (XQuartz) or both.
whether it's a problem with my local machine or the cluster itself. I checked with another user on the cluster and everything works fine for him. He's also using a MacBook, but not one with an M1 chip, and he's running macOS 11.x (mine is macOS 12.0.1). I also tried using his shell rc file but it does not work for me.
My local machine
It is a MacBook Pro 16-inch 2021 with Apple M1 Pro chip.
Operating system: macOS 12.0.1 (21A559)
Terminal: Kitty terminal emulator / macOS Terminal
SSH binary located at /usr/bin/ssh, the macOS built-in command.
X11 program: XQuartz 2.8.1 (xorg-server 1.20.11), and since it's on macOS, the output of defaults read org.xquartz.X11 is
{
"NSWindow Frame x11_apps" = "316 70 454 299 0 0 1728 1079 ";
"NSWindow Frame x11_prefs" = "531 375 484 370 0 0 1728 1079 ";
SUHasLaunchedBefore = 1;
SULastCheckTime = "2021-11-10 08:29:50 +0000";
"app_to_run" = "/opt/X11/bin/xterm";
"cache_fonts" = 1;
"done_xinit_check" = 1;
"enable_iglx" = 1;
"enable_test_extensions" = 1;
"login_shell" = "/bin/sh";
"no_auth" = 0;
"nolisten_tcp" = 1;
"startx_script" = "/opt/X11/bin/startx -- /opt/X11/bin/Xquartz";
}
This is just to point out that I do have "enable_iglx" = 1 as lots of solutions I found ask people to run defaults write org.xquartz.X11 enable_iglx -bool true. This does not work for me.
On the cluster where glxgears works
glxgears gives the following messages
libGL: OpenDriver: trying /usr/lib64/dri/tls/swrast_dri.so
libGL: OpenDriver: trying /usr/lib64/dri/swrast_dri.so
libGL: Can't open configuration file /etc/drirc: No such file or directory.
libGL: Can't open configuration file /home/hwu/.drirc: No such file or directory.
libGL: Can't open configuration file /etc/drirc: No such file or directory.
libGL: Can't open configuration file /home/hwu/.drirc: No such file or directory.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
40021 frames in 5.2 seconds = 7767.365 FPS
glxinfo gives the following messages
libGL: OpenDriver: trying /usr/lib64/dri/tls/swrast_dri.so
libGL: OpenDriver: trying /usr/lib64/dri/swrast_dri.so
libGL: Can't open configuration file /etc/drirc: No such file or directory.
libGL: Can't open configuration file /home/hwu/.drirc: No such file or directory.
libGL: Can't open configuration file /etc/drirc: No such file or directory.
libGL: Can't open configuration file /home/hwu/.drirc: No such file or directory.
libGL error: No matching fbConfigs or visuals found
name of display: localhost:12.0
libGL error: failed to load driver: swrast
display: localhost:12 screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGIS_multisample,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group,
GLX_SGI_make_current_read
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_ARB_create_context_robustness,
GLX_ARB_fbconfig_float, GLX_ARB_framebuffer_sRGB,
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer,
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGIX_visual_select_group, GLX_SGI_make_current_read,
GLX_SGI_swap_control, GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGIX_visual_select_group, GLX_SGI_make_current_read
OpenGL vendor string: Apple
OpenGL renderer string: Apple M1 Pro
OpenGL version string: 1.4 (2.1 Metal - 76.1)
OpenGL extensions:
GL_APPLE_packed_pixels, GL_ARB_depth_texture, GL_ARB_draw_buffers,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_imaging,
GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query,
GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow,
GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp,
GL_ARB_texture_compression, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_filter_anisotropic, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle,
GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_ATIX_texture_env_combine3, GL_ATI_draw_buffers,
GL_ATI_texture_env_combine3, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_equation_separate,
GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract,
GL_EXT_clip_volume_hint, GL_EXT_draw_range_elements, GL_EXT_fog_coord,
GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays,
GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_shadow_funcs,
GL_EXT_stencil_two_side, GL_EXT_stencil_wrap,
GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod_bias,
GL_EXT_texture_rectangle, GL_IBM_texture_mirrored_repeat,
GL_INGR_blend_func_separate, GL_NV_blend_square, GL_NV_depth_clamp,
GL_NV_fog_distance, GL_NV_fragment_program2,
GL_NV_fragment_program_option, GL_NV_light_max_exponent,
GL_NV_texgen_reflection, GL_NV_texture_rectangle,
GL_NV_vertex_program2_option, GL_NV_vertex_program3,
GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp,
GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_SGIX_shadow_ambient,
GL_SGI_color_matrix, GL_SUN_multi_draw_arrays
64 GLX Visuals
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x022 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 None
0x081 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x082 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x083 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 Slow
0x084 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 16 1 Slow
0x085 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x086 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x087 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 Slow
0x088 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 16 1 Slow
0x089 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x08a 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x08b 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 Slow
0x08c 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 16 1 Slow
0x08d 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x08e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x08f 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 Slow
0x090 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 16 1 Slow
0x091 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x092 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x093 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 0 0 0 0 0 0 0 Slow
0x094 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 0 0 0 0 0 16 1 Slow
0x095 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x096 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x097 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 8 0 0 0 0 0 0 Slow
0x098 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 8 0 0 0 0 16 1 Slow
0x099 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x09a 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x09b 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 0 0 0 0 0 0 0 Slow
0x09c 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 0 0 0 0 0 16 1 Slow
0x09d 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x09e 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x09f 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 8 0 0 0 0 0 0 Slow
0x0a0 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 8 0 0 0 0 16 1 Slow
0x0a1 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x0a2 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x0a3 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 None
0x0a4 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 4 1 None
0x0a5 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x0a6 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x0a7 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 None
0x0a8 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 4 1 None
0x0a9 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x0aa 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x0ab 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 None
0x0ac 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 4 1 None
0x0ad 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x0ae 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x0af 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 4 1 None
0x0b0 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x0b1 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x0b2 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 0 0 0 0 0 0 0 None
0x0b3 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 0 0 0 0 0 4 1 None
0x0b4 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x0b5 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x0b6 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 8 0 0 0 0 0 0 None
0x0b7 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 8 0 0 0 0 4 1 None
0x0b8 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x0b9 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x0ba 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 0 0 0 0 0 0 0 None
0x0bb 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 0 0 0 0 0 4 1 None
0x0bc 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x0bd 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x0be 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 8 0 0 0 0 0 0 None
0x0bf 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 8 0 0 0 0 4 1 None
64 GLXFBConfigs:
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x041 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x042 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x043 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 Slow
0x044 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 16 1 Slow
0x045 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x046 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x047 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 Slow
0x048 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 16 1 Slow
0x049 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x04a 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x04b 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 Slow
0x04c 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 16 1 Slow
0x04d 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x04e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x04f 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 Slow
0x050 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 16 1 Slow
0x051 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x052 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x053 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 0 0 0 0 0 0 0 Slow
0x054 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 0 0 0 0 0 16 1 Slow
0x055 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x056 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x057 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 8 0 0 0 0 0 0 Slow
0x058 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 8 0 0 0 0 16 1 Slow
0x059 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x05a 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x05b 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 0 0 0 0 0 0 0 Slow
0x05c 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 0 0 0 0 0 16 1 Slow
0x05d 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x05e 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x05f 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 8 0 0 0 0 0 0 Slow
0x060 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 8 0 0 0 0 16 1 Slow
0x061 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x062 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x063 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 None
0x064 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 4 1 None
0x065 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x066 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x067 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 None
0x068 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 4 1 None
0x069 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x06a 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x06b 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 None
0x06c 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 4 1 None
0x06d 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x06e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x06f 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 None
0x070 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 4 1 None
0x071 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x072 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x073 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 0 0 0 0 0 0 0 None
0x074 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 0 0 0 0 0 4 1 None
0x075 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x076 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x077 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 8 0 0 0 0 0 0 None
0x078 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 8 0 0 0 0 4 1 None
0x079 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x07a 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x07b 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 0 0 0 0 0 0 0 None
0x07c 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 0 0 0 0 0 4 1 None
0x07d 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x07e 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x07f 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 8 0 0 0 0 0 0 None
0x080 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 8 0 0 0 0 4 1 None
On the cluster where glxgears does NOT work:
glxgears gives the following message:
Error: couldn't get an RGB, Double-buffered visual
glxinfo gives the following output (by the way, export LIBGL_DEBUG=verbose does not provide any more information):
name of display: localhost:29.0
display: localhost:29 screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGIS_multisample,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group,
GLX_SGI_make_current_read
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control,
GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer,
GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer,
GLX_NV_multigpu_context, GLX_NV_multisample_coverage,
GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control,
GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer
OpenGL vendor string: Apple
OpenGL renderer string: Apple M1 Pro
OpenGL version string: 1.4 (2.1 Metal - 76.1)
OpenGL extensions:
GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program,
GL_ARB_fragment_program_shadow, GL_ARB_imaging, GL_ARB_multisample,
GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters,
GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_texture_border_clamp,
GL_ARB_texture_compression, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two,
GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays,
GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_shadow_funcs,
GL_EXT_stencil_two_side, GL_EXT_stencil_wrap,
GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc,
GL_EXT_texture_env_add, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_lod_bias, GL_NV_blend_square, GL_NV_depth_clamp,
GL_NV_fog_distance, GL_NV_fragment_program2,
GL_NV_fragment_program_option, GL_NV_light_max_exponent,
GL_NV_texgen_reflection, GL_NV_vertex_program2_option,
GL_NV_vertex_program3, GL_SGIS_generate_mipmap, GL_SGIS_texture_lod
64 GLX Visuals
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x022 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 None
0x081 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x082 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x083 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 Slow
0x084 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 16 1 Slow
0x085 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x086 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x087 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 Slow
0x088 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 16 1 Slow
0x089 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x08a 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x08b 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 Slow
0x08c 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 16 1 Slow
0x08d 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x08e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x08f 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 Slow
0x090 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 16 1 Slow
0x091 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x092 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x093 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 0 0 0 0 0 0 0 Slow
0x094 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 0 0 0 0 0 16 1 Slow
0x095 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x096 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x097 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 8 0 0 0 0 0 0 Slow
0x098 24 tc 0 32 0 r . . 8 8 8 8 . . 4 32 8 0 0 0 0 16 1 Slow
0x099 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x09a 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x09b 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 0 0 0 0 0 0 0 Slow
0x09c 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 0 0 0 0 0 16 1 Slow
0x09d 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x09e 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x09f 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 8 0 0 0 0 0 0 Slow
0x0a0 24 tc 0 32 0 r y . 8 8 8 8 . . 4 32 8 0 0 0 0 16 1 Slow
0x0a1 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x0a2 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x0a3 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 None
0x0a4 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 0 0 0 0 0 4 1 None
0x0a5 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x0a6 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x0a7 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 0 0 None
0x0a8 24 tc 0 32 0 r . . 8 8 8 8 . . 0 32 8 0 0 0 0 4 1 None
0x0a9 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x0aa 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x0ab 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 0 0 None
0x0ac 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 0 0 0 0 0 4 1 None
0x0ad 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x0ae 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x0af 24 tc 0 32 0 r y . 8 8 8 8 . . 0 32 8 0 0 0 0 4 1 None
0x0b0 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x0b1 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x0b2 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 0 0 0 0 0 0 0 None
0x0b3 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 0 0 0 0 0 4 1 None
0x0b4 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x0b5 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x0b6 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 8 0 0 0 0 0 0 None
0x0b7 24 tc 0 32 0 r . . 8 8 8 8 . . 2 32 8 0 0 0 0 4 1 None
0x0b8 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x0b9 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x0ba 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 0 0 0 0 0 0 0 None
0x0bb 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 0 0 0 0 0 4 1 None
0x0bc 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x0bd 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x0be 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 8 0 0 0 0 0 0 None
0x0bf 24 tc 0 32 0 r y . 8 8 8 8 . . 2 32 8 0 0 0 0 4 1 None
32 GLXFBConfigs:
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x041 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x042 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x045 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x046 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x049 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 Slow
0x04a 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 16 1 Slow
0x04d 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 Slow
0x04e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 16 1 Slow
0x051 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x052 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x055 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x056 24 tc 0 32 0 r . . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x059 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 0 0 Slow
0x05a 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 0 0 0 0 0 16 1 Slow
0x05d 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 0 0 Slow
0x05e 24 tc 0 32 0 r y . 8 8 8 8 . . 4 0 8 0 0 0 0 16 1 Slow
0x061 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x062 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x065 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x066 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x069 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x06a 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x06d 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 0 0 None
0x06e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 8 0 0 0 0 4 1 None
0x071 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x072 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x075 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x076 24 tc 0 32 0 r . . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
0x079 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 0 0 None
0x07a 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 0 0 0 0 0 4 1 None
0x07d 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 0 0 None
0x07e 24 tc 0 32 0 r y . 8 8 8 8 . . 2 0 8 0 0 0 0 4 1 None
|
We encountered this issue recently and solved it by running export __GLX_VENDOR_LIBRARY_NAME=mesa in the ssh session. For some reason the NVIDIA OpenGL client library was being loaded despite the Mac not having any NVIDIA hardware. There is probably some way to fix this on the Mac itself, but temporary solutions are permanent.
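For reference, a minimal, self-contained sketch of that check. The here-doc is a stand-in sample for real glxinfo output (the values are hypothetical); on an actual session you would pipe glxinfo itself instead of the sample function:

```shell
# Detect which client-side GLX vendor library is in use and force Mesa
# if it is NVIDIA's. sample_glxinfo stands in for `glxinfo` so this
# sketch runs anywhere.
sample_glxinfo() {
cat <<'EOF'
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
EOF
}

vendor=$(sample_glxinfo | awk -F': ' '/client glx vendor string/ {print $2}')
if [ "$vendor" = "NVIDIA Corporation" ]; then
    # No NVIDIA hardware on this host, so ask libglvnd to dispatch to Mesa.
    export __GLX_VENDOR_LIBRARY_NAME=mesa
fi
echo "vendor=$vendor glx_lib=${__GLX_VENDOR_LIBRARY_NAME:-default}"
```

To make the workaround survive new ssh sessions, the export line could go in the remote account's shell profile.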
| glxgears gives Error: couldn't get an RGB, Double-buffered visual on one remote server but not on another |
1,485,111,614,000 |
I'm trying to install mesa-common-dev on Ubuntu LTS:
sudo apt-get install mesa-common-dev
however, the system returns:
The following packages have unmet dependencies:
mesa-common-dev : Depends: libgl-dev but it is not going to be installed
Depends: libglx-dev but it is not going to be installed
Depends: libglx-dev but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
apt-cache policy mesa-common-dev
mesa-common-dev:
Installed: (none)
Candidate: 20.0.4-2ubuntu1
Version table:
20.0.4-2ubuntu1 500
500 http://br.archive.ubuntu.com/ubuntu focal/main amd64 Packages
apt-cache policy libgl-dev
libgl-dev:
Installed: 1.3.2-1~ubuntu0.20.04.1
Candidate: 1.3.2-1~ubuntu0.20.04.1
Version table:
*** 1.3.2-1~ubuntu0.20.04.1 100
100 /var/lib/dpkg/status
1.3.1-1 500
500 http://br.archive.ubuntu.com/ubuntu focal/main amd64 Packages
apt-cache policy libglx-dev
libglx-dev:
Installed: 1.3.2-1~ubuntu0.20.04.1
Candidate: 1.3.2-1~ubuntu0.20.04.1
Version table:
*** 1.3.2-1~ubuntu0.20.04.1 100
100 /var/lib/dpkg/status
1.3.1-1 500
500 http://br.archive.ubuntu.com/ubuntu focal/main amd64 Packages
apt-cache policy libdrm-dev
libdrm-dev:
  Installed: (none)
Candidate: 2.4.101-2
Version table:
2.4.101-2 500
500 http://br.archive.ubuntu.com/ubuntu focal/main amd64 Packages
Thank you so much
grep -Rn --include=*.list ^[^#] /etc/apt/
/etc/apt/sources.list:5:deb http://br.archive.ubuntu.com/ubuntu/ focal main restricted
/etc/apt/sources.list:15:deb http://br.archive.ubuntu.com/ubuntu/ focal universe
/etc/apt/sources.list:24:deb http://br.archive.ubuntu.com/ubuntu/ focal multiverse
/etc/apt/sources.list:42:deb http://security.ubuntu.com/ubuntu focal-security main restricted
/etc/apt/sources.list:44:deb http://security.ubuntu.com/ubuntu focal-security universe
/etc/apt/sources.list:46:deb http://security.ubuntu.com/ubuntu focal-security multiverse
/etc/apt/sources.list.d/google-chrome.list:3:deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
/etc/apt/sources.list.d/opera-stable.list:4:deb https://deb.opera.com/opera-stable/ stable non-free #Opera Browser (final releases)
/etc/apt/sources.list.d/sublime-text.list:1:deb https://download.sublimetext.com/ apt/stable/
/etc/apt/sources.list.d/microsoft-edge-beta.list:3:deb [arch=amd64] http://packages.microsoft.com/repos/edge/ stable main
SUCCESS!!
I added:
deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
to the sources.list file.
Thanks for helping me.
|
focal-updates is missing in your /etc/apt/sources.list, to correct the problem:
echo "deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse" |\
sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install mesa-common-dev
| mesa-common-dev: unmet dependencies |
1,485,111,614,000 |
I'm trying to install 32 bit nVidia drivers on my 64-bit system (to get wine working with OpenGL). So I tried:
root@grzes:/lib# aptitude install libgl1-nvidia-glx:i386 libxvmcnvidia1:i386
The following NEW packages will be installed:
libgl1-nvidia-glx:i386 libxvmc1:i386{ab} libxvmcnvidia1:i386
0 packages upgraded, 3 newly installed, 0 to remove and 2 not upgraded.
Need to get 6,661 kB of archives. After unpacking 32.2 MB will be used.
The following packages have unmet dependencies:
libxvmc1 : Conflicts: libxvmc1:i386 but 2:1.0.7-1+deb7u2 is to be installed.
libxvmc1:i386 : Conflicts: libxvmc1 but 2:1.0.7-1+deb7u2 is installed.
The following actions will resolve these dependencies:
Remove the following packages:
1) kaffeine
2) kplayer
3) libxine1-x
4) libxine2-x
5) libxvmc1
6) libxvmcnvidia1
7) mencoder
8) mplayer
9) mplayerthumbs
10) nvidia-glx
11) smplayer
12) smplayer-themes
13) smplayer-translations
14) task-desktop
15) task-gnome-desktop
16) xine-ui
17) xserver-xorg-video-all
18) xserver-xorg-video-intel
19) xserver-xorg-video-openchrome
Leave the following dependencies unresolved:
20) digikam recommends mplayerthumbs
21) libgl1-nvidia-glx recommends libxvmcnvidia1
22) nvidia-kernel-dkms recommends nvidia-glx (>= 304.88)
23) youtube-dl recommends mplayer2 | mplayer
Accept this solution? [Y/n/q/?] q
Abandoning all efforts to resolve these dependencies.
Abort.
However, as you can see there is a conflict. How can I make it work?
|
The info on the internet is conflicting on this topic, so here are two leads that I found. I don't have multiarch or Debian myself, but am still trying to assist anyway.
Area to investigate #1 - Wine
I think you want to install the 32-bit NVIDIA drivers inside of Wine. I found this thread, it's on a FreeBSD forum but is still applicable:
excerpt: http://forums.freebsd.org/showthread.php?t=26597
3D acceleration is working with the 64bit nvidia driver provided that you install the 32bit version (same version number) into the chroot (tested with World of Warcraft, 8.0-RELEASE).
Area to investigate #2 - nvidia-glx
I found this thread on a crunchbang forum, but should still apply. Thread's titled: Index» Help & Support (Testing/Unstable)» NVIDIA Drivers on x86_64 not installing x86 32bit OpenGL library.
That thread suggested the installation of this package:
$ sudo apt-get install libgl1-nvidia-glx:i386
| Having nVidia OpenGL 32bit driver on a 64bit Debian system in multiarch |
1,485,111,614,000 |
I run FC 24 (just upgraded from FC 23).
After the upgrade there were some issues with the X server, and so I decided to change from Nvidia proprietary drivers to Nouveau. Everything seems OK, except that I can't get GLX to work. For
glxinfo
I get:
name of display: :0.0
Xlib: extension "GLX" missing on display ":0.0".
Xlib: extension "GLX" missing on display ":0.0".
Xlib: extension "GLX" missing on display ":0.0".
A bunch of times, and then
Error: couldn't find RGB GLX visual or fbconfig
For errors in Xorg.0.log, specifically for
less /var/log/Xorg.0.log |grep EE
I get:
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 86.925] (EE) Failed to load module "nv" (module does not exist, 0)
[ 93.381] (EE) AIGLX error: dlopen of /usr/lib64/dri/nouveau_dri.so failed (/usr/lib64/dri/nouveau_dri.so: undefined symbol: _glapi_check_multithread)
[ 93.381] (EE) AIGLX: reverting to software rendering
[ 93.389] (EE) AIGLX error: dlopen of /usr/lib64/dri/swrast_dri.so failed (/usr/lib64/dri/swrast_dri.so: undefined symbol: _glapi_check_multithread)
[ 93.389] (EE) GLX: could not load software renderer
In Xorg.1.log, there is:
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 246.220] (EE) module ABI major version (6) doesn't match the server's version (9)
[ 246.220] (EE) Failed to load module "glx" (module requirement mismatch, 0)
[ 246.221] (EE) Failed to load module "nv" (module does not exist, 0)
I do have mesa-libGL installed.
Any ideas?
P.S.: As a sideshow, there also is something somewhere that still calls the 'nv' module instead of nouveau. But given that there is no xorg.conf anymore, but it's all de-centralized in xorg.conf.d, I can't figure out where that is.
|
Poked around on Rpmfusion and found a few more steps to take, to remove garbage left behind by the NVIDIA installer.
https://rpmfusion.org/Howto/nVidia#Recoverfromnvidia_installer
Namely:
rm -f /usr/lib{,64}/libGL.so.* /usr/lib{,64}/libEGL.so.*
rm -f /usr/lib{,64}/xorg/modules/extensions/libglx.so
dnf reinstall xorg-x11-server-Xorg mesa-libGL mesa-libEGL
mv /etc/X11/xorg.conf /etc/X11/xorg.conf.saved
The last line was probably not necessary in my case, but the others likely were.
Seems to work now (for example glxgears shows spinning gears).
| Cannot get glx to work after changing from Nvidia to Nouveau drivers FC24 |
1,485,111,614,000 |
I'd like to do OpenGL rendering from an application that talks to an X11 server. The application reads the value of the DISPLAY variable.
I have access to a CentOS 7 box that has a nice graphics card capable of doing 3D rendering, but I don't have a monitor plugged into it.
When I run xstart, to start the X11 server, I get the following error:
Fatal server error:
(EE) no screens found(EE)
How do I start an X11 server for rendering on a graphics card, without a physical display?
This box sits in a server room, so I can't plug a physical display into it.
Also, xvfb or software renderers are perhaps not useful for this task, because it does not handle instructions needed for rendering. I would need to use the graphics adapter.
Here are the graphics adapters available to me:
# lspci | egrep 'VGA|3D'
04:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
0a:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. G200eR2 (rev 01)
Here is my xorg.conf file:
# more /etc/X11/xorg.conf
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 375.20 (buildmeister@swio-display-x86-rhel47-06) Tue Nov 15 17:49:44 PST 2016
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
EndSection
Section "Files"
FontPath "/usr/share/fonts/default/Type1"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/input/mice"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 28.0 - 33.0
VertRefresh 43.0 - 72.0
Option "DPMS"
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
I can post transcripts of any other useful logs. Thanks for any advice!
|
I've run into this problem before. Unfortunately, the best answer I've been able to come up with is a hardware solution: trick the graphics card into thinking a monitor is installed by plugging a VGA terminator into the VGA output. You can make one at home or buy one; googling for "VGA terminator" returns plenty of results for both.
Another option may be to run a VNC server on the headless system, but I'm not sure whether the graphics card can render to a VNC output.
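With the proprietary nvidia driver there is also a software alternative sometimes used on headless boxes: allowing the X server to start with no display attached. This is a sketch, not a configuration tested on this exact card; the option goes in the "Device" section of xorg.conf (nvidia-xconfig --allow-empty-initial-configuration can also add it):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    VendorName "NVIDIA Corporation"
    # Let the X server start even when no monitor is connected.
    Option     "AllowEmptyInitialConfiguration" "true"
EndSection
```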
| Start X11 server on CentOS 7 without screen but with a graphics card |
1,485,111,614,000 |
I would like to run a remote OpenGL program through ssh tunneled with the -X option.
My laptop has Optimus so anything using OpenGL has to go through optirun (bumblebee). This would explain why I can't launch the program (vmd in my case, it says can't open OpenGl GLX).
Is there a way around?
|
Yes, there is, but it isn't really usable.
OpenGL uses 3D acceleration, which is effectively provided by a chip on your actual, local video card. That means you can't get 3D acceleration on remote machines over X forwarding.
What you can do instead:
You could use the Mesa software version of the 3D libraries. That means software 3D rendering, without any acceleration. It would be painfully slow even on the local machine; tunneled over ssh it will be unusable.
You could run the program on the remote machine's own video card (you can start an X server on it remotely), and connect to it with VNC.
Option (2) will probably be faster. IMHO, it is usable in a production environment only for applications that use a little bit of 3D acceleration, but not very much (for example, some DOS-based games or MATLAB).
| optirun and ssh -X |
1,485,111,614,000 |
I'm using ffmpeg's x11grab to do some screencasting. It works pretty well except on 3D stuff. In particular it seems like 3D draw areas flicker in and out. You can see an example of it here.
The issue is present even when I capture only the screen (i.e., not adding in all the other fancy stuff and the webcam capture).
I've done a lot of googling on this issue and have found people with a similar issue, but no solution. Many suggest that it is due to OpenGL rendering directly to the hardware and bypassing X11 entirely.
Does anyone know of a way to deal with this? If it matters I'm using an nVidia graphics card.
|
I finally resolved it! The problem was to do with OpenGL as I suspected. To solve the issue, I downloaded VirtualGL. Specifically I grabbed the .deb file from here and installed it with dpkg.
Running my applications with vglrun (i.e. vglrun application) and then starting the screencast now works perfectly; it even runs more smoothly than it did without vgl.
| x11grab flickers in OpenGL draw areas |
1,485,111,614,000 |
I am trying to build Cube2 Sauerbraten, But I need the OpenGL and SDL2 libraries to run the makefile. (I am using ubuntu here) I tried running sudo apt-get install --yes software-properties-common g++ make then sudo apt-get install --yes libsdl2-dev then sudo apt-get install --yes freeglut3-dev and lastly, to compile, g++ main.cpp -I /usr/include/SDL2/ -lSDL2 -lGL.
I got these commands from https://gist.github.com/dirkk0/cad259e6a3965abb4178. When I run them, the first three commands work fine, but the last one did not work, giving me this error.
optiplex780@super-OptiPlex-780:~$ g++ main.cpp -I /usr/include/SDL2/ -lSDL2 -lGL
cc1plus: fatal error: main.cpp: No such file or directory
compilation terminated.
optiplex780@super-OptiPlex-780:~$
Should I replace main.cpp with the makefile?
Am I just a dunce, or is there a problem here? After installing the packages, I tried going to the ~/sauerbraten/src directory, and running make install. I got these errors.
optiplex780@super-OptiPlex-780:~/sauerbraten_2020_12_29_linux/sauerbraten/src$ make install
make -C enet/ all
make[1]: Entering directory '/home/optiplex780/sauerbraten_2020_12_29_linux/sauerbraten/src/enet'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/optiplex780/sauerbraten_2020_12_29_linux/sauerbraten/src/enet'
g++ -O3 -fomit-frame-pointer -Wall -fsigned-char -o sauer_client shared/crypto.o shared/geom.o shared/stream.o shared/tools.o shared/zip.o engine/3dgui.o engine/bih.o engine/blend.o engine/blob.o engine/client.o engine/command.o engine/console.o engine/cubeloader.o engine/decal.o engine/dynlight.o engine/glare.o engine/grass.o engine/lightmap.o engine/main.o engine/material.o engine/menus.o engine/movie.o engine/normal.o engine/octa.o engine/octaedit.o engine/octarender.o engine/physics.o engine/pvs.o engine/rendergl.o engine/rendermodel.o engine/renderparticles.o engine/rendersky.o engine/rendertext.o engine/renderva.o engine/server.o engine/serverbrowser.o engine/shader.o engine/shadowmap.o engine/sound.o engine/texture.o engine/water.o engine/world.o engine/worldio.o fpsgame/ai.o fpsgame/client.o fpsgame/entities.o fpsgame/fps.o fpsgame/monster.o fpsgame/movable.o fpsgame/render.o fpsgame/scoreboard.o fpsgame/server.o fpsgame/waypoint.o fpsgame/weapon.o -Lenet/.libs -lenet -L/usr/X11R6/lib `sdl-config --libs` -lSDL_image -lSDL_mixer -lz -lGL -lrt
/bin/sh: 1: sdl-config: not found
/usr/bin/ld: cannot find -lSDL_image
/usr/bin/ld: cannot find -lSDL_mixer
collect2: error: ld returned 1 exit status
make: *** [Makefile:163: client] Error 1
optiplex780@super-OptiPlex-780:~/sauerbraten_2020_12_29_linux/sauerbraten/src$
|
Your program has many files, so a single g++ command won't be enough. A plain make command (no arguments) is usually the right way to compile software from its Makefile.
The Makefile is in the src folder; you should enter it (cd src) before launching make. make install compiles the software (if that hasn't been done yet) and installs it.
According to the readme_source.txt file, it uses zlib, so the zlib1g-dev package will be helpful, along with libsdl-mixer1.2-dev and libsdl-image1.2-dev (on a Debian system; the actual version may vary. You seem to have the SDL 2 version installed, while this build expects 1.2).
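A small pre-flight check can catch the "sdl-config: not found" failure before make runs. This is a sketch; the package names in the hint are Debian/Ubuntu names (sdl-config ships in libsdl1.2-dev) and may differ on other systems:

```shell
# check_tools echoes every tool from its arguments that is not on the PATH.
check_tools() {
    missing=""
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
    done
    echo "$missing"
}

# The Sauerbraten client link step needs sdl-config plus a compiler.
if [ -n "$(check_tools sdl-config g++ make)" ]; then
    echo "try: sudo apt-get install libsdl1.2-dev libsdl-image1.2-dev libsdl-mixer1.2-dev zlib1g-dev"
fi
```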
| How to install OpenGl and SDL2 libraries on ubuntu |
1,485,111,614,000 |
As a Linux fan, I want to get into OpenGL development as a hobby of mine.
I know that OpenGL is just an API that the GPU vendors must implement.
Some GPU vendors' OpenGL/Vulkan implementations are proprietary, whilst some are open source (like Intel).
Because I like open source, I want to make sure I don't use anything proprietary when I develop stuff, so how would I go about checking whether or not my GPU is currently using Mesa for rendering?
The reason I am asking is because I've gotten mixed messages online, as I have heard that you can apparently still have Mesa installed but the GPU will be using something else that is proprietary, which is why I wanted to ask this question.
Any help would be appreciated.
|
TL;DR
Run this in your terminal:
glxinfo | grep -i mesa
If you do see something in the output, then you should be using mesa. If nothing shows up, it's something else.
And if it's something else, glxinfo | grep -i vendor should tell you what it is.
Full explanation:
So first of all, it's not really about choosing an OpenGL/Vulkan implementation directly. The implementation you use depends on which GPU driver you use. So what you really want to do is ensure that you are using a GPU driver that Mesa supports.
With that said, there are not many GPU driver options anyway. If you are using Intel or AMD, you are most likely already using the open-source drivers that work with Mesa. There are simply no other options for Intel, and the proprietary AMD driver has been deprecated.
If you are using Nvidia, it's a little more complicated. As of now, there are three drivers out there: the official, proprietary Nvidia driver; the open-source version of the official Nvidia driver; and nouveau.
The first offers the best performance but, as I said, is proprietary, and it provides its own proprietary implementation of OpenGL. You can't use Mesa with it.
The second is quite new, released in May this year. It is the open-sourced version of the proprietary driver. It also has its own implementation of OpenGL, which is not Mesa, but it IS open source and offers performance comparable to the proprietary one.
The last one, nouveau, is the older open-source driver for Nvidia GPUs, written by the community by reverse-engineering the proprietary Nvidia driver. This used to be the only option if you really wanted an open-source driver with Nvidia's GPUs, and it does work with Mesa. But since it's based on reverse-engineering, the performance sucks.
So, in conclusion, if you happen to be on an Nvidia GPU and HAVE TO USE MESA, you could go with nouveau, though I do not recommend that due to its poor performance. If you are just looking for being open-source, install the open-source version of the official Nvidia driver. (Remember to uninstall every other driver and only install the driver you want to use.)
However, with all that being said, one thing should be clarified.
I want to make sure I don't use anything proprietary when I develop stuff
Are you actually trying to make sure that "you, yourself, when developing and/or using your computer, are not using anything proprietary", or it's just "nothing in the software that you developed would use proprietary stuff"?
If it's the latter case, you don't even need to care about which driver or implementation you are using. They are all OpenGL anyway, and since OpenGL itself is open, you are not using anything proprietary in your own code. If it's the former case, then, as I've said, just install the correct driver and you should be good.
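The TL;DR check can be sketched as a self-contained script. The here-doc is a hypothetical sample of Mesa output standing in for real glxinfo, so the logic can be tried anywhere; on a real system, replace the sample function with glxinfo:

```shell
# Stand-in for `glxinfo` so the check runs without an X display.
sample() {
cat <<'EOF'
OpenGL vendor string: Mesa/X.org
OpenGL renderer string: llvmpipe (LLVM 12.0.0, 256 bits)
OpenGL version string: 4.5 (Compatibility Profile) Mesa 21.2.6
EOF
}

if sample | grep -qi mesa; then
    result="Mesa detected"
else
    # Fall back to reporting whatever vendor string is present.
    result="non-Mesa driver: $(sample | awk -F': ' '/vendor string/ {print $2}')"
fi
echo "$result"
```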
| How to check whether or not your GPU is currently using Mesa for rendering OpenGL/Vulkan? |
1,485,111,614,000 |
I'm using the following graphics settings for my KVM guests:
...
<graphics type="spice">
<listen type="none"/>
<image compression="off"/>
<gl enable="yes" rendernode="/dev/dri/by-path/pci-0000:69:00.0-render"/>
</graphics>
...
<video>
<model type="virtio" heads="1" primary="yes">
<acceleration accel3d="yes"/>
</model>
<address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>
...
Which is working perfectly fine for Fedora (32/33) and Ubuntu (20.04), but I couldn't make it work on any other distro so far. Even on Fedora, it only works if you turn it on (enable="yes") after installing (won't work on the live cd). But after that, it works out of the box.
When trying the same on - let's say - manjaro, even the grub menu freezes and is unusable. I tried systemd-boot instead as well. However, if you manage to boot it, the screen will remain black (it appears to be running just fine, as the mouse integration is working despite the black screen).
Update:
It doesn't seem to be a kernel issue after all. I missed this as I had autologin enabled, but lightdm shows up just fine. It's actually the DE (XFCE) that is giving me a black screen. Gnome on Wayland and Xorg works fine. The GRUB menu is still unusable though. It seems to just be a virgl compatibility issue with GRUB? and XFCE. I might try again without EFI, but the original issue is basically resolved.
Now I wonder: What does fedora/ubuntu have that e.g.: manjaro is missing? Is it a kernel option or some driver package that I'm missing?
I tried installing virglrenderer on my manjaro guest, which didn't help (I'm not sure if it's meant for the guest either).
Update:
When using ssh on the manjaro guest with a black screen, I get:
dmesg | grep drm
[ 0.836414] [drm] pci: virtio-vga detected at 0000:00:01.0
[ 0.836420] fb0: switching to virtiodrmfb from EFI VGA
[ 0.836526] [drm] virgl 3d acceleration enabled
[ 0.836527] [drm] EDID support available.
[ 0.837182] [drm] number of scanouts: 1
[ 0.837186] [drm] number of cap sets: 2
[ 0.845823] [drm] cap set 0: id 1, max-version 1, max-size 308
[ 0.845964] [drm] cap set 1: id 2, max-version 2, max-size 688
[ 0.846341] [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
[ 0.848777] virtio_gpu virtio0: fb0: virtio_gpudrmfb frame buffer device
[ 2.095162] systemd[1]: Condition check resulted in Load Kernel Module drm being skipped.
And the working fedora guest:
dmesg | grep drm
[ 2.164964] [drm] pci: virtio-vga detected at 0000:00:01.0
[ 2.177043] [drm] features: +virgl +edid
[ 2.177652] [drm] number of scanouts: 1
[ 2.177658] [drm] number of cap sets: 2
[ 2.193509] [drm] cap set 0: id 1, max-version 1, max-size 308
[ 2.193596] [drm] cap set 1: id 2, max-version 2, max-size 688
[ 2.193840] [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
[ 2.217427] virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
[ 3.552834] systemd[1]: Condition check resulted in Load Kernel Module drm being skipped.
And on the host (working vm has the same output, but works):
qemu-system-x86_64 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2/ovmf/OVMF_CODE.fd \
-drive if=pflash,format=raw,readonly,file=/path/to/qemu/nvram/manjaro_VARS.fd \
-drive file=/path/to/manjaro.qcow2 \
-m 8192 -enable-kvm -M q35 -cpu host -smp 16,sockets=1,cores=16,threads=1 \
-vga virtio -display gtk,gl=on \
-usb -device usb-tablet \
-net user,hostfwd=tcp::10022-:22 -net nic
gl_version 46 - core profile enabled
vrend_renderer_fill_caps: Entering with stale GL error: 1280
GLSL feature level 430
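Interestingly, both guest logs above do contain a virgl marker ("virgl 3d acceleration enabled" on manjaro, "+virgl" on fedora), which supports the conclusion that the kernel side is fine. A minimal, self-contained sketch of such a check on a saved log (guest-dmesg.log is a hypothetical capture, e.g. from dmesg > guest-dmesg.log inside the guest):

```shell
# Sketch: check a saved guest kernel log for the virgl marker lines seen above.
printf '%s\n' '[    0.836526] [drm] virgl 3d acceleration enabled' > guest-dmesg.log
if grep -q -e 'virgl 3d acceleration enabled' -e '+virgl' guest-dmesg.log; then
    echo "virgl active"
else
    echo "virgl not active"
fi
```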
|
The problem is that Xfwm's built-in compositor and virgl don't play nice together.
Work-around: Boot the VM with virgl=off (on the video device) or gl=off (on the display), run xfwm4-tweaks-settings in the VM, select the "Compositor" tab, and uncheck "Enable display compositing". Then shut down the VM and re-enable virgl.
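For the one-time boot with GL disabled, only the display flag of a launch line like the one in the question changes; a trivial sketch of that edit on a saved command line:

```shell
# Sketch: flip gl=on to gl=off in a saved qemu launch line for the one-time boot.
echo 'qemu-system-x86_64 ... -vga virtio -display gtk,gl=on ...' | sed 's/gl=on/gl=off/'
```

Once booted, the compositor can presumably also be toggled non-interactively with xfconf-query -c xfwm4 -p /general/use_compositing -s false (same setting that xfwm4-tweaks-settings exposes graphically; treat this as an assumption and verify on your Xfce version).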
picom works with Xfwm and doesn't seem to have the same issues, so if you want a compositor, install/use picom in the VM instead of using Xfwm's built-in compositor: https://wiki.archlinux.org/index.php/Picom
| KVM Virgl acceleration only works on some guests? |
1,485,111,614,000 |
Some quick background, I am running Fedora 19 x86_64 on a Dell Latitude with a 2nd gen i7 and discrete nvidia graphics card.
I have some rather obnoxious problems where the screen doesn't seem to render consistently. The freezes are irregular and short, but frequent. The system has behaved like that since install, and initially I noticed it in a certain online multiplayer 3D Java game. I thought it was lag, but single player and other games behave similarly. Then I realized it actually consistently happens in the Desktop Environment (Gnome 3) and at times is almost unusable. I had to wait for about 30 seconds typing a sentence in this question.
So what do I do to diagnose this problem? Who is most likely at fault? X? OpenGL? Graphics driver? Gnome 3? Kernel? Hardware?
I am not even sure how to check what driver is being used or whether the discrete card is being taken advantage of. Also, the cursor retains the ability to move during the freezes which even further confuses me. Why might I be able to wiggle the cursor, but nothing else (like text) will render?
|
Welp, I answered my own question after slightly more work. I had been annoyed by this problem for some time and just never made the right connection.
NVIDIA Optimus. After installing lshw and running it with the video option, I noticed two displays active. One was for the i7, the other for the NVS 4200M. Did not take long to learn about Optimus, and after disabling Optimus in the BIOS everything ran smoothly.
Though I also swapped out nouveau for the proprietary driver as nouveau was rather slow.
My battery life has consequently also increased, and average temperature has decreased. Additionally, I found a way to support Optimus. Perhaps the machine's battery life and average temperature will improve again with that, as it was meant to.
| Diagnosing a freezing system, or renderer |
1,485,111,614,000 |
I just read a Phoronix article which compared the FOSS radeon drivers to a five-year-old FGLRX Catalyst. As you would expect, FGLRX was multiple times faster, even though its feature set was not completely implemented.
The big question, not answered in the article, was why. I noticed FGLRX brings its own libGL; does Nvidia do this also? I know hardware registers are not always completely known, and yadda yadda... I still suspect that mesa is not a strong performer.
What needs to be done to reach remotely close to catalyst speed? What projects need collaboration? Which ones need to be completely ditched?
|
Well, I do not have inside information about either the open-source or the proprietary projects, but the answer is pretty simple from my point of view. FOSS video drivers are made by people in their free time, on their specific hardware. Many times these programmers do not have the motivation, the hardware resources, the time, the knowledge or the professionalism required to write such specific and difficult software.
I personally admire their effort to make open-source video drivers, and Nouveau has come a long way for NVidia, but regardless of the manufacturer, if the development is not directly supported with specifications, knowledge and money by the hardware maker, I see no way something open source can be better than the proprietary driver.
A very positive example is Intel, which contributes to and supports the open-source drivers for its graphics chips, to the point that no separate proprietary driver is even made.
| Why FOSS 3d performs so badly, compared to proprietary |
1,485,111,614,000 |
EDIT: The cause (or one cause) appears to be a segfault in libEGL-nvidia, which I guess causes glxtest to fail, which causes firefox to assume the drivers are faulty (which they could partially be). I have received an update to firefox 111, but it didn’t fix the problem.
WebGL has suddenly stopped working in Firefox. My drivers seem fine. They're all detected properly, even by Firefox if I force it to. The problem appears to be that glxtest fails because it can't detect my GPU, which results in WebGL support being blocklisted.
Pastebin with full troubleshooting log (WebGL force-enabled): https://pastebin.com/cX6ZWFhL
Startup error:
[GFX1-]: No GPUs detected via PCI
[GFX1-]: glxtest: process failed (received signal 11)
Errors without force-enabled WebGL:
# WebGL 1 driver renderer
WebGL creation failed:
* WebglAllowWindowsNativeGl:false restricts context creation on this system. ()
* Exhausted GL driver options. (FEATURE_FAILURE_WEBGL_EXHAUSTED_DRIVERS)
# WebGL 2 driver renderer
WebGL creation failed:
* AllowWebgl2:false restricts context creation on this system. ()
lspci -vv output:
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2507 (rev a1) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. [MSI] Device c978
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 138
Region 0: Memory at 50000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 6000000000 (64-bit, prefetchable) [size=8G]
Region 3: Memory at 6200000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 4000 [size=128]
Expansion ROM at 51000000 [virtual] [disabled] [size=512K]
Capabilities: <access denied>
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
glxinfo output:
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
<extension list clipped for readability>
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
<extension list clipped for readability>
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 8192 MB
Total available memory: 8192 MB
Currently available dedicated video memory: 7744 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce RTX 3050/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 525.85.05
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
<extension list clipped for readability>
OpenGL version string: 4.6.0 NVIDIA 525.85.05
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
<extension list clipped for readability>
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 525.85.05
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
<extension list clipped for readability>
nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05 Driver Version: 525.85.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| 0% 44C P8 7W / 130W | 206MiB / 8192MiB | 2% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
I'm sorry if this is the wrong place, it's just been really difficult to find help and the issue is super frustrating.
|
This issue appears to be fixed in version 114. I've reset the settings I used when I first got this issue to try to get Firefox to use WebGL and it still works without that. I haven't seen this get mentioned anywhere so I don't know if this was caused by another fix or if something on my end changed, but the problem is gone for me.
| Firefox cannot detect GPU (libEGL segfault) |
1,485,111,614,000 |
From What I've searched, my openGL renderer should show my discrete GPU but strangely, it shows my integrated GPU.
Here is my lspci | grep -E "VGA|Display"
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445] (rev c3)
and my glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 620 (Kaby Lake GT2)
I have an Ubuntu 18.04
Running on Inspiron 15 5567 16GB ram which has Radeon R7 M440
I've also tried the switcharoo, to no avail.
|
You need to set DRI_PRIME value before running the programs.
Example
DRI_PRIME=1 glxinfo | grep OpenGL
This is assuming you already set the proper provider
related article : PRIME
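As a sanity check that the variable actually reaches the program (the same prefix pattern generalizes to glxgears, games, etc. — sh -c is used here only to show that the child process sees it):

```shell
# Sketch: DRI_PRIME is read per process, so prefix whatever should run on the dGPU.
DRI_PRIME=1 sh -c 'echo "child sees DRI_PRIME=$DRI_PRIME"'
```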
| Why is my OpenGL renderer shows my CPU? |
1,485,111,614,000 |
I need to upgrade my OpenGL version from 3.0 to 3.1. Stackexchange is full of postings that tackle specific situations, and it proved difficult for me to see the tree from the wood, not to mention to estimate the ageing of the trees, so to speak.
Therefore, I have collected the following information on my situation, only to ask another case-specific question. The questions are
Whether the upgrade is possible
Which steps must/can be taken to that end (please be as specific as possible)
The situation is the following:
OS: Ubuntu 14.04 LTS
Kernel, from uname -vr: 4.4.0-96-generic #119~14.04.1-Ubuntu SMP Wed Sep 13 08:40:48 UTC 2017
Device, from lshw -c video: VGA compatible controller GT218 [GeForce 210] NVIDIA --- this does support OpenGL 3.1 as from the vendor's specs
Driver from lshw -c video: nouveau
Nouveau info from dpkg -l | grep nouveau
ii libdrm-nouveau2:amd64 2.4.67-1ubuntu0.14.04.2 amd64 [...]
ii libdrm-nouveau2:i386 2.4.67-1ubuntu0.14.04.2 i386 [...]
ii xserver-xorg-video-nouveau-lts-xenial 1:1.0.12-1build2~trusty1 amd64 [...]
OpenGL info from glxinfo | grep OpenGL
OpenGL vendor string: nouveau
OpenGL renderer string: Gallium 0.4 on NVA8
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.2.0
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.2.0
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
Additional info available upon request. For example synaptic lists 28 installed packages responding to the search term 'mesa', but I could not say which one is relevant.
|
Summarizing comments: the application probably misbehaves.
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.2.0
This means you have Mesa 11.2 installed, and the maximum supported OpenGL version is 3.3.
Now, why does the application say otherwise? The most common way to query OpenGL version used to be a call to glGetString(GL_VERSION), and that's what the application uses.
Proof it uses that? The MESA_GL_VERSION_OVERRIDE environment variable alters the reported version to whatever you set it to. And setting it to 3.1 makes the application stop complaining.
Now, like all OpenGL functions, glGetString requires an OpenGL context to be active. However, since the release of OpenGL 3.2, to create an OpenGL context you must state beforehand the version you want. This lets later versions stay compatible with programs that use an older version(*).
Here is the fun part: the version reported by glGetString depends on which version was selected when creating the context. This leads badly implemented and old applications that don't select a profile explicitly to believe the version they asked for is the maximum version supported.
And if the application did not select a version, a compatibility context is created automatically with an older version. I think this is the one you see on this line:
OpenGL version string: 3.0 Mesa 11.2.0
If this is the actual issue, upgrading will change nothing. But you can keep the MESA_GL_VERSION_OVERRIDE=3.1 trick. It should load a 3.1 profile and make the program happy while waiting for it to be fixed.
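The override is a plain environment variable set per invocation; a minimal sketch (the application name would replace the echo, and sh -c here only demonstrates that the child process sees the value):

```shell
# Sketch: the variable only affects the process it is prefixed to.
MESA_GL_VERSION_OVERRIDE=3.1 sh -c 'echo "override seen: $MESA_GL_VERSION_OVERRIDE"'
```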
(*) About profiles. This is something new with OpenGL 3.2: the OpenGL feature set can be selected at runtime by letting the program request an OpenGL version, which the OpenGL implementation will “downgrade” to. That works from version 3.2 forward, leaving the question of what to do with all the older stuff, especially as OpenGL 3 is a major revamp of the API (glBegin / glVertex and friends are gone). The choice was made to split the API into two profiles: Core and Compatibility. The compatibility context retains old, obsolete calls while the core context does away with them.
Support of a compatibility context is completely optional though, and while most vendors provide one that roughly matches the time of the split (from 3.0 to 3.2), few bother making newer versions of the compatibility context. That's what @Ruslan was suggesting in his answer: Mesa only supports the compatibility profile for OpenGL 3.0, but nVidia supports higher versions as well. That could make your program happy without having to lie to it.
| Updating OpenGL from 3.0 to 3.1 |
1,373,311,224,000 |
I want to try out WebGL as I'm learning OpenGL currently and, well yeah, kind of interested in how WebGL looks like.
I tried some demo sites like this or that. But unfortunately they don't work. I get the warning both in firefox and chromium:
This page requires a browser that supports WebGL.
Browsing to the official WebGL homepage tells me the following:
Hmm. While your browser seems to support WebGL, it is disabled or unavailable. If possible, please ensure that you are running the latest drivers for your video card.
I've found this answer on AskUbuntu, but enabling the software rendering list override does not enable WebGL for me in chromium.
I'm running Sabayon Linux x86_64 with an NVIDIA GPU:
~ # uname -a
Linux qdoe 3.9.0-sabayon #1 SMP Thu Jun 27 07:53:45 UTC 2013 x86_64 Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz GenuineIntel GNU/Linux
~ # lspci | grep VGA
02:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 640] (rev a1)
~ # firefox -v
Mozilla Firefox 20.0.1
~ # chromium --version
Chromium 27.0.1453.110
Using following drivers:
~ # equo search x11-drivers/nvidia-drivers-319.32#3.9.0-sabayon
>> @@ Package: x11-drivers/nvidia-drivers-319.32#3.9.0-sabayon branch: 5, [sabayon-limbo]
>> Available: version: 319.32 ~ tag: 3.9.0-sabayon ~ revision: 0
>> Installed: version: 319.32 ~ tag: 3.9.0-sabayon ~ revision: 0
>> Slot: 0,3.9.0-sabayon
>> Homepage: http://www.nvidia.com/
>> Description: NVIDIA GPUs kernel drivers
>> License: NVIDIA-r1
~ # lsmod | grep nvidia
nvidia 9376114 23
Any ideas how to enable WebGL in either Chrome or Firefox?
|
Well, according to the WebGL public wiki, both firefox and chrome support WebGL with Nvidia GPUs in X11/Linux.
In my case the wrong graphic driver was selected.
~ # eselect opengl list
Available OpenGL implementations:
[1] nvidia
[2] xorg-x11 *
Setting it back to nvidia fixed my issues with WebGL.
~ # eselect opengl set nvidia
Switching to nvidia OpenGL interface... done
| How to enable WebGL in Chrome or Firefox? |
1,373,311,224,000 |
Recently, I switched from using a dedicated ATI video card and proprietary drivers to the onboard Intel video card and default/generic Intel drivers because the ATI video card was damaged and I couldn't get the proprietary driver to play nicely with my X server.
Anyway, now I get core dumps whenever a program that uses OpenGL runs. E.g.:
[birdsnest ~]% glxinfo -v
name of display: :0
zsh: segmentation fault (core dumped) glxinfo -v
(Struck through in a later edit:) There is nothing suspicious in my /var/log/xorg.0.log, and all appears to be well:
[ 33.580] (II) LoadModule: "glx"
[ 33.604] (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so
[ 33.645] (II) Module glx: vendor="Advanced Micro Devices, Inc."
[ 33.645] compiled for 6.9.0, module version = 1.0.0
[ 33.645] Loading extension GLX
[ 33.645] (==) Matched intel as autoconfigured driver 0
[ 33.645] (==) Matched vesa as autoconfigured driver 1
[ 33.645] (==) Matched modesetting as autoconfigured driver 2
[ 33.645] (==) Matched fbdev as autoconfigured driver 3
[ 33.645] (==) Assigned the driver to the xf86ConfigLayout
[ 33.645] (II) LoadModule: "intel"
[ 33.645] (II) Loading /usr/lib64/xorg/modules/drivers/intel_drv.so
[ 33.700] (II) Module intel: vendor="X.Org Foundation"
[ 33.700] compiled for 1.12.99.905, module version = 2.20.2
[ 33.700] Module class: X.Org Video Driver
[ 33.700] ABI class: X.Org Video Driver, version 13.0
[ 33.700] (II) LoadModule: "vesa"
[ 33.700] (II) Loading /usr/lib64/xorg/modules/drivers/vesa_drv.so
[ 33.709] (II) Module vesa: vendor="X.Org Foundation"
[ 33.709] compiled for 1.13.0, module version = 2.3.2
[ 33.709] Module class: X.Org Video Driver
[ 33.709] ABI class: X.Org Video Driver, version 13.0
[ 33.709] (II) LoadModule: "modesetting"
[ 33.710] (II) Loading /usr/lib64/xorg/modules/drivers/modesetting_drv.so
[ 33.723] (II) Module modesetting: vendor="X.Org Foundation"
[ 33.723] compiled for 1.13.0, module version = 0.5.0
[ 33.723] Module class: X.Org Video Driver
[ 33.723] ABI class: X.Org Video Driver, version 13.0
[ 33.723] (II) LoadModule: "fbdev"
[ 33.723] (II) Loading /usr/lib64/xorg/modules/drivers/fbdev_drv.so
[ 33.731] (II) Module fbdev: vendor="X.Org Foundation"
[ 33.731] compiled for 1.12.99.905, module version = 0.4.3
[ 33.731] Module class: X.Org Video Driver
[ 33.731] ABI class: X.Org Video Driver, version 13.0
[ 33.731] (II) intel: Driver for Intel Integrated Graphics Chipsets: i810,
i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G,
E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G,
965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45,
4 Series, G45/G43, Q45/Q43, G41, B43, B43, Clarkdale, Arrandale,
Sandybridge Desktop (GT1), Sandybridge Desktop (GT2),
Sandybridge Desktop (GT2+), Sandybridge Mobile (GT1),
Sandybridge Mobile (GT2), Sandybridge Mobile (GT2+),
Sandybridge Server, Ivybridge Mobile (GT1), Ivybridge Mobile (GT2),
Ivybridge Desktop (GT1), Ivybridge Desktop (GT2), Ivybridge Server,
Ivybridge Server (GT2)
[ 33.731] (II) VESA: driver for VESA chipsets: vesa
[ 33.731] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
[ 33.731] (II) FBDEV: driver for framebuffer: fbdev
[ 33.731] (++) using VT number 1
edit: backtrace from core dump (I have no idea how useful this is or what to make of it):
Core was generated by `glxinfo -l'.
Program terminated with signal 11, Segmentation fault.
#0 0x000000341ea0a240 in xcb_glx_query_server_string_string_length () from /usr/lib64/libxcb-glx.so.0
Missing separate debuginfos, use: debuginfo-install glx-utils-9.0-0.8.el6_4.3.x86_64
(gdb) bt
#0 0x000000341ea0a240 in xcb_glx_query_server_string_string_length () from /usr/lib64/libxcb-glx.so.0
#1 0x00000031fa444bf4 in __glXQueryServerString () from /usr/lib64/libGL.so.1
#2 0x00000031fa420ca0 in ?? () from /usr/lib64/libGL.so.1
#3 0x00000031fa41d92d in ?? () from /usr/lib64/libGL.so.1
#4 0x00000031fa41ea8f in glXChooseVisual () from /usr/lib64/libGL.so.1
#5 0x000000000040163e in ?? ()
#6 0x0000000000402d8a in ?? ()
#7 0x000000340c61ecdd in __libc_start_main () from /lib64/libc.so.6
#8 0x0000000000401179 in ?? ()
#9 0x00007fffd0ae5628 in ?? ()
#10 0x000000000000001c in ?? ()
#11 0x0000000000000002 in ?? ()
#12 0x00007fffd0ae74d2 in ?? ()
#13 0x00007fffd0ae74da in ?? ()
#14 0x0000000000000000 in ?? ()
So I'm not sure if this is a driver issue or something else, and I have no clue how to debug. Does anybody know how I should go about debugging this issue?
edit: I just noticed the line Module glx: vendor="Advanced Micro Devices, Inc." in my xorg log. I noticed that file has a fairly recent last modified time - around the time I installed the ATI video drivers. Is it possible that this library was clobbered when I installed the proprietary ATI driver?
|
I figured it out. As I suspected, there is a conflict between the shared libraries that AMD wanted to use and the shared libraries that the Intel driver uses. The solution was to uninstall the AMD driver. I did this by running an uninstall script in /usr/share/ati. After I ran this script, I had to restore my /etc/X11/xorg.conf that I had been using with the Intel drivers, since the uninstall script attempted to clobber it.
| opengl/glx core dumps - generic intel linux drivers, RHEL6 |
1,373,311,224,000 |
I am working on OpenGL shaders, and I need uint64_t types, etc...
However, when I do glxinfo, this extension is not in the list.
I am using Mesa 18.0.5, and this page tells that the extension is supported for radeonsi drivers from 17.1.0.
My GPU is a AMD Radeon HD 8730M. I am using the radeon driver, but switching to amdgpu is not helping.
Question: how can I manage to use uint64 in my shaders? By switching to another driver? By updating Mesa? Or is my GPU too old?
The shader I try to compile:
#version 450
#extension GL_ARB_gpu_shader5 : enable
#extension GL_ARB_gpu_shader_int64 : enable
void main()
{
uint64_t foo = 0ul;
}
I got:
0:3(12): warning: extension `GL_ARB_gpu_shader_int64' unsupported in fragment shader
0:7(11): error: syntax error, unexpected NEW_IDENTIFIER, expecting ',' or ';'
glxinfo output:
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_create_context, GLX_ARB_create_context_profile,
[...]
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.5
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
GL_3DFX_texture_compression_FXT1, GL_AMD_conservative_depth,
[...]
|
Got it.
Mesa was indeed using my integrated graphics chipset. By launching all the commands with the environment variable DRI_PRIME=1, I was able to use my discrete GPU directly, thus enabling the desired extensions.
However, I am not sure if setting this environment variable each time, or globally, is a good solution.
| Unable to use OpenGL ARB_gpu_shader_int64 extension with Mesa |
1,373,311,224,000 |
I recently got a new laptop (Thinkpad T480) which has Intel integrated "UHD Graphics 620" and an Nvidia MX150, and I installed Ubuntu 18.04. I installed the nvidia driver alright, and I believe I am using the Nvidia card successfully to run my laptop's display/external monitors.
However, I have a problem displaying 3D content: when I try to create a 3D plot in Mathematica, the program simply crashes (this does not happen when I switch back to using my Intel card with prime-select). Furthermore, when I try to launch Steam, I get the error "OpenGL GLX extension not supported by display" (and again this does not occur and steam works normally when I use my integrated graphics). Finally, with the nvidia card selected, I am unable to even login to the standard gnome desktop environment (I simply get booted back out to the login screen). Luckily I normally use xmonad, and that seems to work fine.
I tried reinstalling xserver-xorg, which was suggested somewhere online, but that didn't help. I saw other information about installing Bumblebee, but all of that seems to be from many years ago (and the latest release of Bumblebee is over 5 years old, so I was a little wary about it).
Nevertheless, I tried installing Bumblebee and, after modifying /etc/bumblebee/bumblebee.conf to use the correct directory for the libGL.so.1 driver, I was able to run a game through Steam. I never tried running Steam itself using optirun, but I ran Civilization V with optirun through Steam and it seemed to work as intended, and I could see that the Nvidia card was being used with the program NVTOP. Civilization V does involve 3D graphics, but I'm not sure if it uses OpenGL. I also tried running Minecraft (which I think does use OpenGL) through optirun and just got a window with a black screen. I tried optirun glxgears and got an error that said
X Error of failed request: BadMatch (invalid parameter attributes)
I did some more research and found that perhaps Bumblebee was not the way to go (multiple reports of bugs with Ubuntu 18.04)... so now I am back in the situation I described in the first and second paragraphs above. I figured it was time to ask for help.
Below are the outputs to some commands I have seen in other questions related to this issue:
Here is my output when I try to run glxinfo:
name of display: :0
Error: couldn't find RGB GLX visual or fbconfig
Here is my output when I try to run glxgears:
Error: couldn't get an RGB, Double-buffered visual
Here is my output when I run lspci -nnnk | grep "VGA\|'Kern'\|3D\|Display" -A2:
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 620 [8086:5917] (rev 07)
Subsystem: Lenovo UHD Graphics 620 [17aa:225e]
Kernel driver in use: i915
--
01:00.0 3D controller [0302]: NVIDIA Corporation GP108M [GeForce MX150] [10de:1d10] (rev a1)
Subsystem: Lenovo GP108M [GeForce MX150] [17aa:225e]
Kernel driver in use: nvidia
|
I tried again on a fresh install of Ubuntu 18.04 and installed the Nvidia driver before anything else, and that worked (everything seems to be working now). I believe something else I had previously installed (not sure what) was conflicting with some of the files required by my graphics setup.
| Problems displaying 3D content with Nvidia graphics card in Ubuntu 18.04 |
1,373,311,224,000 |
I'm trying to find the source package for libglut.so.
I found this website that explains how to do it in ubuntu:https://askubuntu.com/questions/481/how-do-i-find-the-package-that-provides-a-file
But I need to know how to do it in CentOS.
My system is CentOS 6.4
|
Assuming you already have the package installed, rpm -q --whatprovides /usr/lib64/libglut.so.3 will show the name of the package providing the file.
(Inapplicable as you don't want to use yum): if the package is not already installed, then yum provides /usr/lib64/libglut.so.3 or simply yum provides 'libglut*' will search the yum repos and provide the information.
| Is there a way that I can find the source package of a file in CentOS without using yum? |
1,373,311,224,000 |
I'm running the following code in the python console
import pygame
pygame.init()
Here is the output from the terminal
libGL error: MESA-LOADER: failed to open iris: /home/souvik/anaconda3/envs/game_env/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/dri/iris_dri.so) (search paths /usr/lib/dri)
libGL error: failed to load driver: iris
libGL error: MESA-LOADER: failed to open iris: /home/souvik/anaconda3/envs/game_env/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/dri/iris_dri.so) (search paths /usr/lib/dri)
libGL error: failed to load driver: iris
libGL error: MESA-LOADER: failed to open swrast: /home/souvik/anaconda3/envs/game_env/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/dri/swrast_dri.so) (search paths /usr/lib/dri)
libGL error: failed to load driver: swrast
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 99
Current serial number in output stream: 100
I've just installed Manjaro Linux "5.10.42-1-MANJARO".
I'm guessing there's some driver error for openGL or something. I'd like to know how to resolve this issue.
|
I had the same issue with `GLIBCXX_3.4.29' not found.
First you should check if you can see GLIBCXX_3.4.29 in your conda lib:
strings ~/miniconda3/lib/libstdc++.so.6 | grep GLIBCXX_3.4.2
If not you should check if it exists in your systems lib:
strings /lib/libstdc++.so.6 | grep GLIBCXX_3.4.2
If this shows the version you can simply copy the file from the /lib to the miniconda3/lib with:
cp /lib/libstdc++.so.6 ~/miniconda3/lib/
but also check where the lib folder is located in your miniconda environment!
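Since the GLIBCXX tags are plain text embedded in the library, the same grep works on any file; a self-contained simulation of the check (fake_libstdcxx.txt stands in for the real libstdc++.so.6):

```shell
# Sketch: simulate the version-tag check; a real run would be
#   strings /lib/libstdc++.so.6 | grep GLIBCXX_3.4.29
printf 'GLIBCXX_3.4.28\nGLIBCXX_3.4.29\n' > fake_libstdcxx.txt
grep -c 'GLIBCXX_3\.4\.29' fake_libstdcxx.txt
```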
| Trying to run pygame on my conda environment, on my fresh Manjaro install and getting libGL MESA-LOADER error |
1,373,311,224,000 |
When I try to use plot() in octave-cli I get an empty window instead of a plot and the following error:
Insufficient GL support
which suggests that the glx module is missing from the X server configuration. So I added
Section "Module"
Load "glx"
EndSection
to my otherwise empty X configuration file at /usr/local/etc/X11/xorg.conf.
It didn't help.
What is interesting is that I've got the following logs in /var/log/Xorg.0.log:
(EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
(I cannot recover the whole log file, but before that message there is information that mesa-dri has already successfully loaded GLX).
I tried to set up the Nvidia card to support GLX, but I couldn't find a way to do it.
What can I do to bring GL support to my system?
Details
OS: FreeBSD 12.0-CURRENT FreeBSD 12.0-CURRENT #2 r324767 amd64 with a GENERIC kernel build from source.
Hardware: Lenovo Yoga 3 14 with Nvidia GeForce 940M and Intel Broadwell HD Graphics 5500.
|
tl;dr
The solution is pretty simple:
pkg remove nvidia-driver nvidia-xconfig nvidia-settings xorg drm-next-kmod
pkg autoremove
pkg install xorg drm-next-kmod
What happened?
It turns out that nvidia-driver overwrites files previously installed by xorg and/or drm-next-kmod. As a result the X server is unable to determine which component is really in charge of supporting GL.
AFAIK, the technology used in this machine is called Optimus (more here). It doesn't seem to be well supported on FreeBSD and its configuration is not obvious. Because of that, it is not recommended to mix those two GPUs on FreeBSD. Just stick to one of them (I've chosen the Intel card).
References
Nvidia drivers vs. Intel drivers on thin clients: https://forums.freebsd.org/threads/7887/#post-46059
Intel, Nvidia, Optimus in xorg.conf: https://forums.freebsd.org/threads/45510/#post-254225
| Missing GL support on FreeBSD with Intel graphics |
1,373,311,224,000 |
I was trying to run a C++ program which requires GLX version 1.3 to run. When I check the version of GLX after directly logging into a Fedora computer by typing glxinfo | grep "version" I get that the GLX version is 1.4. However, when I SSH into that same computer as the same user from my Windows 8 laptop with PuTTY, I get that the GLX version is 1.2 after typing the same command.
Why would the version of GLX on the Linux computer be dependent on whether or not I used SSH to log into the machine? Furthermore, is there a way I can use the GLX version 1.4 that (appears to) exists on the Fedora computer through SSH?
I have limited intuition as to the answers to the above questions, but when I asked someone else with more Linux knowledge than me, he suggested that it might have to do with some sort of configuration file being run when directly logging in that is not run when using SSH - the idea being that there might theoretically exist many versions of GLX on the computer but the version being selected is different in the two scenarios. How would I verify that this is the cause? And more importantly how would I then have the newer version selected when I use SSH?
By the way, I have X11 forwarding set up on my Windows computer (with Xming) and it is working fine, but the output of the version of GLX that is given by glxinfo | grep "version" seems to me like it would be independent of this.
Also I am not sure if it matters, but I first SSHed into a remote access server and then from there used ssh -Y to SSH into the computer which I knew had the GLX version 1.4 when logging in directly.
Thank you for your help!
|
glxinfo reports the capabilities of the X server pointed at by the DISPLAY variable. When you log in directly to your Fedora workstation, that’s your Fedora X server. When you log in using PuTTY with X forwarding, that’s Xming. That’s why you get different results.
The whole point is to determine the capabilities of the system that’s displaying, not those of the system where programs are running.
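The server being queried is whatever the DISPLAY variable points at, so you can see which one glxinfo will talk to just by inspecting it. A quick sketch, using a hypothetical value typical of an SSH-forwarded session:

```shell
# DISPLAY selects which X server answers GLX queries such as glxinfo.
# Under SSH X forwarding it typically points at a forwarded display
# (the value below is a hypothetical example, not taken from your session):
DISPLAY=localhost:10.0
echo "glxinfo will query the X server at: $DISPLAY"
```

On a local login DISPLAY is usually `:0`, i.e. the Fedora machine's own X server, which is why the reported GLX version differs.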
| GLX version is different when using SSH versus logging in directly? |
1,373,311,224,000 |
I'm developing an OpenGL game and I've copied a portion of code with a similar function. The code was partially modified for its new purpose, but there were still some bugs: it was calling OpenGL rendering functions with wrong data and parameters.
After calling OpenGL functions with wrong data/arguments, the whole system freezes and I'm not even able to switch to a console with Ctrl+Alt+F1.
This disappoints me, because Linux is supposed to be a stable OS. So why can a buggy OpenGL program crash the whole system?
|
Given the "monolithic" nautre of the Linux kernel, an error in code that runs in the highest privilege level of the CPU, usually entitled "kernel-mode", can crash the whole system.
There are three reasons for this:
Such code can directly access the memory space of any other code. So it is possible for such code to corrupt the kernel itself, running drivers, etc.
Such code can directly access I/O devices. It's quite possible to misconfigure or set the wrong bits at wrong times on I/O devices in a way that causes the entire system to lockup. Non-buggy device drivers won't let user code do anything to hardware that could cause an unstable system, but buggy, beta, or poorly written (or wrong) drivers just might.
Code that encounters a problem or exception it can't handle doesn't have a higher level to "throw" to. So a kernel exception will crash the system.
So I don't know to what extent OpenGL works in the kernel or with the graphics driver but I hope this helps.
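By contrast, a crash in user-mode code is contained: the kernel kills only the offending process and everything else keeps running. A small shell sketch that simulates this by having a subprocess die of SIGSEGV:

```shell
# Simulate a user-space crash: the subprocess sends itself SIGSEGV and dies,
# but the parent shell (and the rest of the system) carries on unharmed.
if sh -c 'kill -SEGV $$'; then
  status=0
else
  status=$?   # 128 + 11: the shell's encoding for "killed by SIGSEGV"
fi
echo "subprocess exit status: $status (128 + 11, i.e. killed by SIGSEGV)"
```

Kernel-mode code has no such supervisor above it, which is why the same class of bug there takes the whole machine down.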
| How can bad OpenGL calls cause whole system crash? |
1,373,311,224,000 |
I am using KDE neon 5.21 with Ubuntu 20.04 LTS on a HP Elitebook. The system is running pretty smooth but there is one major inconvenience:
The desktop sometimes becomes extremely laggy. The mouse is still moving smoothly but every other action (moving windows, typing text, every click) takes a delay of about one second. This happens after one of the following actions (still discovering more):
specifically when closing the application guvcview
applying different window decoration style via settings
applying different compositor settings
adding a property to a window to force it to take a specific size
Quitting and restarting the session usually fixes the issue, but not always. I have also tested other distributions such as openSuse KDE on this machine and the issue is exactly the same.
The only thing that fixes this issue permanently is switching the compositor from OpenGL 2.0 or 3.1 to XRender. So I think OpenGL might be the culprit somehow. I would still like to use OpenGL as it feels smoother.
Here is the output of inxi -G, in case that might be helpful.
Graphics: Device-1: Intel UHD Graphics driver: i915 v: kernel
Display: x11 server: X.Org 1.20.9 driver: modesetting unloaded: fbdev,vesa
resolution: 1920x1080~60Hz, 2560x1440~60Hz
OpenGL: renderer: Mesa Intel UHD Graphics (CML GT2) v: 4.6 Mesa 20.2.6
I can, of course, provide more output if desired.
|
Since one of the latest updates - unfortunately I don't know which one exactly did the trick - the issue seems to be gone. My compositor is set to OpenGL and the lagging cannot be reproduced anymore.
| KDE neon desktop becomes laggy after closing applications or applying settings |
1,373,311,224,000 |
Is it possible to run an OpenGL application like glxgears from the command line without starting a desktop environment?
It should directly go to exclusive full screen mode.
|
It is not possible to run an application meant for X purely from the command line, without an X server. But as @cylgalad said, you can use any desktop environment and have that application run exclusively.
Try to install a lightweight desktop environment, like xfce or fluxbox.
| Running OpenGL app without desktop |
1,373,311,224,000 |
My school got donated 35 computers that are 4 years old. Each computer has a 32-bit architecture. I've taken one to use for an AR Sandbox (a project by Oliver Kreylos). For that I've installed Linux Mint 18.3 "Sylvia" with the MATE desktop for 32-bit architecture. Then I installed all the software needed by the AR Sandbox, from now on referred to as SARndbox. Then I got to the last step of the installation process, the ./bin/CalibrateProjector -s 1024 768 step.
When I run that I get this error:
~/src/SARndbox-2.4 $ ./bin/CalibrateProjector -s 1024 768
CalibrateProjector: Capturing 120 background frames...Vrui: Caught exception GLExtensionManager: Extension GL_EXT_gpu_shader4 not supported by local OpenGL while initializing rendering windows
done
which means I don't have drivers compatible with OpenGL, which are necessary to run this program. I've installed the SARndbox on three different computers and this is the first time I got this error.
When I run lspci | grep VGA, I get 00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09).
Then glxinfo | grep vendor gives me:
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center
which means I don't have the necessary drivers installed. I started using Linux three months ago and I still don't know how exactly drivers work in Linux. I have read that Linux normally comes with all the necessary drivers out of the box, but sometimes there are proprietary drivers which need to be installed from the vendor's site. And I have already searched for quite a while but I didn't find the drivers for Linux.
Could anyone tell me where to find them? I think I could install them by myself, but if you will, could you explain to me how to install them too?
Thank you in advance.
|
Extension GL_EXT_gpu_shader4 not supported by local OpenGL while initializing rendering windows
doesn’t mean that you don’t have the appropriate OpenGL drivers, it means that the software you’re trying to run isn’t written correctly. It should either request GL_EXT_gpu_shader4 using a core profile (not a compatibility profile), or it should target OpenGL 3.0 or greater, which incorporate the features provided by gpu_shader4 as a core feature (no extension needed).
The “Intel Open Source Technology Center” vendor string means you do have OpenGL drivers running. glxinfo | grep version will tell you exactly what version of OpenGL your system supports.
Note that Xeon E3 systems are 64-bit systems, so unless you have a particular reason to run a 32-bit distribution, you should install the 64-bit version instead. Note too the caveat in the instructions you’re following: “The following instructions are no longer up-to-date.”
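For reference, the lines that glxinfo | grep version picks out look like this; the sample output below is hypothetical, from a Mesa driver, and the exact version numbers on your machine will differ:

```shell
# Hypothetical glxinfo output; grep keeps only the lines reporting versions.
sample='server glx version string: 1.4
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.5
OpenGL version string: 3.0 Mesa 18.0.5'
versions=$(printf '%s\n' "$sample" | grep version)
echo "$versions"
```

If the core profile line reports 3.0 or greater, the features of gpu_shader4 are available without the extension.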
| Need Intel Xeon E3-1200 drivers for Linux Mint 18 |
1,373,311,224,000 |
I have fedora 26, I have latest codeblocks and freeglut and freeglut-devel.
In the Code::Blocks wizard I pick a GLUT project. It asks me to enter a location but it can't detect it. I tried /usr, /usr/include and /usr/include/GL. How do I get it working?
|
A member of the Code::Blocks forums called Jens solved my problem:
Create a global variable (Settings -> Global variables) with the name glut, and enter /usr in base and /usr/lib64 in lib.
And keep the default $(#glut) as the location in the wizard.
| Freeglut on Fedora 26 and codeblocks |
1,373,311,224,000 |
I'm trying to build a simple application using Qt 5.7 on CentOS 7.3. But when I try to compile it, I get the following errors:
cannot find -lGL
collect2: error: ld returned 1 exit status
The application code is irrelevant because any GUI application (Qt Widgets or QtQuick) gives the same errors.
My graphics card is Intel HD4000 Graphics which comes with Intel i7-3610QM CPU.
What can be the problem? How do I solve it?
Thanks in advance.
|
-lGL means libGL.so. Finding the package providing libGL.so:
yum provides */libGL.so
Install the package(s): # yum install mesa-libGL-devel mesa-libGLU-devel
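The underlying convention: -lNAME makes the linker search for libNAME.so (or libNAME.a) on its library path, which is why -lGL fails until a package providing libGL.so is installed. A tiny illustration of the mapping (the lib_for helper is just for demonstration, not part of any toolchain):

```shell
# -lNAME resolves to libNAME.so on the linker search path.
lib_for() { echo "lib$1.so"; }
lib_for GL    # the file that -lGL asks the linker for
lib_for GLU   # the file that -lGLU asks the linker for
```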
| ld cannot find -lGL on CentOS 7 |
1,373,311,224,000 |
I am trying to install OpenGL on my Fedora.
But first I am confused about some concepts.
I am using an ATI Radeon 5470 card and I downloaded the driver from the official website. Here is the link
I want to ask
Q1: Is this equivalent to OpenGL?
Q2: How to uninstall the driver I have installed.
PS:
I used
sh ati-driver-installer-11-11-x86.x86_64.run
in the command line. A GUI pops up and, after several clicks, the driver is installed, which is quite simple.
Answering Q2 myself:
There is a script installed on my system
sudo sh /usr/share/ati/fglrx-uninstall.sh
|
AFAIK they provide their own OpenGL implementation with drivers, so you should already have it installed.
You should've had another open source implementation before installing drivers though, likely Mesa.
Tip: I've never had to install OpenGL explicitly in my life.
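To check which implementation is actually in use, the vendor string reported by glxinfo is the usual tell. A sketch with illustrative values — the fglrx string is what ATI's driver typically reports, and the Mesa string shown is from an Intel system (your Radeon's Mesa string will differ):

```shell
# On the machine itself you would run:  glxinfo | grep "OpenGL vendor"
# Typical vendor strings, depending on which implementation is loaded:
fglrx_vendor='ATI Technologies Inc.'
mesa_vendor='Intel Open Source Technology Center'
echo "fglrx reports:        OpenGL vendor string: $fglrx_vendor"
echo "Mesa (Intel) reports: OpenGL vendor string: $mesa_vendor"
```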
| Installing OpenGL in fedora 14 |
1,373,311,224,000 |
Although it may seem incredible, my problem is that GL does not work if X is open, but it does work if X is not open.
I know this because I tested it with the video game Warzone 2100: when I log out and there is only a TTY terminal, it does detect GL and opens the game, and I can play as normal.
I'm getting this error along with a message in Arabic, which basically says the same thing as the error but in that language:
warzone2100
fatal |09:34:57: [wzMainScreenSetup_CreateVideoWindow:2927] Can't create a window, because: GLX is not supported
The problem is specifically that when X is on it does not detect GL. Apparently it's not a bug in the video game, because glxgears doesn't work for me either.
Error glxgears:
glxgears
Error: couldn't get an RGB, Double-buffered visual
ldd glxgears
/usr/bin/glxgears
linux-vdso.so.1 (0x00007ffc681d6000)
libGL.so.1 => /usr/lib/libGL.so.1 (0x00007fae6c09d000)
libX11.so.6 => /usr/lib/libX11.so.6 (0x00007fae6bf5a000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007fae6be6d000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fae6bc8b000)
libGLdispatch.so.0 => /usr/lib/libGLdispatch.so.0 (0x00007fae6bbd3000)
libGLX.so.0 => /usr/lib/libGLX.so.0 (0x00007fae6bb9f000)
libxcb.so.1 => /usr/lib/libxcb.so.1 (0x00007fae6bb74000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib/ld-2.38.so (0x00007fae6c12d000)
libXau.so.6 => /usr/lib/libXau.so.6 (0x00007fae6bb6f000)
libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0x00007fae6bb67000)
realpath libGL
realpath /usr/lib/libGL.so.1
/usr/lib/libGL.so.1.7.0
Video info
Video-Info 1.5.1 - sáb 23 dic 2023 on LxPupSc64 20.06 - Linux 5.7.2-lxpup64 x86_64
Chip description:
0.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
X Server: Xorg Driver used: nouveau
X.Org version: 21.1.10
dimensions: 1280x1024 pixels (312x234 millimeters)
depth of root window: 24 planes
uname -a
uname -a
Linux puppypc19012 5.7.2-lxpup64 #1 SMP Sat Jun 13 11:12:52 BST 2020 x86_64 GNU/Linux
inxi
inxi
CPU~Dual core Pentium E5700 (-MCP-) speed/max~2764/2969 MHz Kernel~5.7.2-lxpup64 x86_64 Up~1:47 Mem~1039.6/1990.8MB HDD~80.0GB(84.8% used) Procs~176 Client~Shell inxi~2.3.8
/proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Pentium(R) Dual-Core CPU E5700 @ 3.00GHz
stepping : 10
microcode : 0xa07
cpu MHz : 2931.525
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm pti tpr_shadow vnmi flexpriority vpid dtherm
vmx flags : vnmi flexpriority tsc_offset vtpr vapic
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 6037.59
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Pentium(R) Dual-Core CPU E5700 @ 3.00GHz
stepping : 10
microcode : 0xa07
cpu MHz : 2912.258
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm pti tpr_shadow vnmi flexpriority vpid dtherm
vmx flags : vnmi flexpriority tsc_offset vtpr vapic
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 6037.59
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
/etc/X11/xorg.conf
Section "ServerFlags"
Option "IgnoreABI" "true"
Option "DontVTSwitch" "true"
Option "RandR" "on"
Option "AutoAddDevices" "true"
Option "DontZap" "false"
EndSection
Section "ServerLayout"
Identifier "X.org Configured"
Screen 0 "Screen0" 0 0
#InputDevice "Synaptics Mouse" "AlwaysCore" #serverlayoutsynaptics
# InputDevice "VboxMouse" "CorePointer"
#InputDevice "Mouse0" "CorePointer"
#InputDevice "Keyboard0" "CoreKeyboard"
EndSection
Section "Files"
ModulePath "/usr/lib/X11/modules"
FontPath "/usr/share/fonts/local"
FontPath "/usr/share/fonts/TTF"
FontPath "/usr/share/fonts/OTF"
FontPath "/usr/share/fonts/Type1"
FontPath "/usr/share/fonts/misc"
FontPath "/usr/share/fonts/truetype"
FontPath "/usr/share/fonts/opentype"
FontPath "/usr/share/fonts/woff"
FontPath "/usr/share/fonts/CID"
FontPath "/usr/share/fonts/75dpi/:unscaled"
FontPath "/usr/share/fonts/100dpi/:unscaled"
FontPath "/usr/share/fonts/75dpi"
FontPath "/usr/share/fonts/100dpi"
FontPath "/usr/share/fonts/cyrillic"
FontPath "/usr/share/fonts/X11/misc"
FontPath "/usr/share/X11/fonts/misc"
FontPath "/usr/share/fonts/X11/TTF"
FontPath "/usr/share/fonts/X11/OTF"
FontPath "/usr/share/fonts/X11/Type1"
FontPath "/usr/share/fonts/X11/100dpi"
FontPath "/usr/share/fonts/X11/75dpi"
FontPath "/usr/share/X11/fonts"
FontPath "/usr/local/share/fonts"
FontPath "/usr/local/share/X11/fonts"
EndSection
Section "Module"
Load "synaptics" #loadsynaptics
Load "glx"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Monitor Vendor"
ModelName "Monitor Model"
HorizSync 35-81
VertRefresh 59-76
#UseModes "Modes0" #monitor0usemodes
# Option "PreferredMode" "1024x768" #monitor0prefmode
EndSection
Section "Modes"
Identifier "Modes0"
#modes0modeline0
EndSection
Section "Device"
Identifier "Card0"
Driver "nouveau" #card0driver
BusID "1:0:0" #card0busid
EndSection
Section "Screen"
Identifier "Screen0"
# Device "Card0"
Monitor "Monitor0"
# DefaultDepth 24
#Option "metamodes" "1280x800_60 +0+0" #METAMODES_0
Subsection "Display"
Depth 24 #screen0depth
Modes "1280x1024" #screen0modes
EndSubsection
EndSection
ls -Rlh /usr/lib/X11/modules
/usr/lib/X11/modules:
total 730K
drwxr-xr-x 2 root root 105 dic 17 14:20 dri
drwxr-xr-x 2 root root 1,3K dic 16 10:43 drivers
drwxr-xr-x 2 root root 32 dic 16 01:28 extensions
drwxr-xr-x 2 root root 474 dic 16 11:00 input
-rw-r--r-- 1 root root 103K dic 16 01:28 libexa.so
-rw-r--r-- 1 root root 23K dic 16 01:28 libfbdevhw.so
-rw-r--r-- 1 root root 217K dic 16 01:28 libglamoregl.so
-rw-r--r-- 1 root root 164K dic 16 01:28 libint10.so
-rw-r--r-- 1 root root 15K dic 16 01:28 libshadowfb.so
-rw-r--r-- 1 root root 39K dic 16 01:28 libshadow.so
-rw-r--r-- 1 root root 36K dic 16 01:28 libvgahw.so
-rw-r--r-- 1 root root 136K dic 16 01:28 libwfb.so
/usr/lib/X11/modules/dri:
total 30M
lrwxrwxrwx 1 root root 17 dic 17 14:20 kms_swrast_dri.so -> libgallium_dri.so
-rw-r--r-- 1 root root 30M sep 25 17:28 libgallium_dri.so
lrwxrwxrwx 1 root root 17 dic 17 14:20 swrast_dri.so -> libgallium_dri.so
lrwxrwxrwx 1 root root 17 dic 17 14:20 zink_dri.so -> libgallium_dri.so
/usr/lib/X11/modules/drivers:
total 0
lrwxrwxrwx 1 root root 43 dic 16 10:43 amdgpu_drv.la -> ../../../xorg/modules/drivers/amdgpu_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 amdgpu_drv.so -> ../../../xorg/modules/drivers/amdgpu_drv.so
lrwxrwxrwx 1 root root 40 dic 16 10:43 apm_drv.la -> ../../../xorg/modules/drivers/apm_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 ark_drv.la -> ../../../xorg/modules/drivers/ark_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 ast_drv.la -> ../../../xorg/modules/drivers/ast_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 ati_drv.la -> ../../../xorg/modules/drivers/ati_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 ati_drv.so -> ../../../xorg/modules/drivers/ati_drv.so
lrwxrwxrwx 1 root root 42 dic 16 10:43 chips_drv.la -> ../../../xorg/modules/drivers/chips_drv.la
lrwxrwxrwx 1 root root 46 dic 16 10:43 cirrus_alpine.la -> ../../../xorg/modules/drivers/cirrus_alpine.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 cirrus_drv.la -> ../../../xorg/modules/drivers/cirrus_drv.la
lrwxrwxrwx 1 root root 46 dic 16 10:43 cirrus_laguna.la -> ../../../xorg/modules/drivers/cirrus_laguna.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 dummy_drv.la -> ../../../xorg/modules/drivers/dummy_drv.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 dummy_drv.so -> ../../../xorg/modules/drivers/dummy_drv.so
lrwxrwxrwx 1 root root 42 dic 16 10:43 fbdev_drv.la -> ../../../xorg/modules/drivers/fbdev_drv.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 fbdev_drv.so -> ../../../xorg/modules/drivers/fbdev_drv.so
lrwxrwxrwx 1 root root 42 dic 16 10:43 geode_drv.la -> ../../../xorg/modules/drivers/geode_drv.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 glint_drv.la -> ../../../xorg/modules/drivers/glint_drv.la
lrwxrwxrwx 1 root root 41 dic 16 10:43 i128_drv.la -> ../../../xorg/modules/drivers/i128_drv.la
lrwxrwxrwx 1 root root 41 dic 16 10:43 i740_drv.la -> ../../../xorg/modules/drivers/i740_drv.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 intel_drv.la -> ../../../xorg/modules/drivers/intel_drv.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 intel_drv.so -> ../../../xorg/modules/drivers/intel_drv.so
lrwxrwxrwx 1 root root 43 dic 16 10:43 mach64_drv.la -> ../../../xorg/modules/drivers/mach64_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 mga_drv.la -> ../../../xorg/modules/drivers/mga_drv.la
lrwxrwxrwx 1 root root 48 dic 16 10:43 modesetting_drv.la -> ../../../xorg/modules/drivers/modesetting_drv.la
lrwxrwxrwx 1 root root 48 dic 16 10:43 modesetting_drv.so -> ../../../xorg/modules/drivers/modesetting_drv.so
lrwxrwxrwx 1 root root 45 dic 16 10:43 neomagic_drv.la -> ../../../xorg/modules/drivers/neomagic_drv.la
lrwxrwxrwx 1 root root 44 dic 16 10:43 nouveau_drv.la -> ../../../xorg/modules/drivers/nouveau_drv.la
lrwxrwxrwx 1 root root 44 dic 16 10:43 nouveau_drv.so -> ../../../xorg/modules/drivers/nouveau_drv.so
lrwxrwxrwx 1 root root 39 dic 16 10:43 nv_drv.la -> ../../../xorg/modules/drivers/nv_drv.la
lrwxrwxrwx 1 root root 47 dic 16 10:43 openchrome_drv.la -> ../../../xorg/modules/drivers/openchrome_drv.la
lrwxrwxrwx 1 root root 47 dic 16 10:43 openchrome_drv.so -> ../../../xorg/modules/drivers/openchrome_drv.so
lrwxrwxrwx 1 root root 40 dic 16 10:43 qxl_drv.so -> ../../../xorg/modules/drivers/qxl_drv.so
lrwxrwxrwx 1 root root 41 dic 16 10:43 r128_drv.la -> ../../../xorg/modules/drivers/r128_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 radeon_drv.la -> ../../../xorg/modules/drivers/radeon_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 radeon_drv.so -> ../../../xorg/modules/drivers/radeon_drv.so
lrwxrwxrwx 1 root root 33 dic 16 10:43 rdp -> ../../../xorg/modules/drivers/rdp
lrwxrwxrwx 1 root root 46 dic 16 10:43 rendition_drv.la -> ../../../xorg/modules/drivers/rendition_drv.la
lrwxrwxrwx 1 root root 39 dic 16 10:43 s3_drv.la -> ../../../xorg/modules/drivers/s3_drv.la
lrwxrwxrwx 1 root root 44 dic 16 10:43 s3virge_drv.la -> ../../../xorg/modules/drivers/s3virge_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 savage_drv.la -> ../../../xorg/modules/drivers/savage_drv.la
lrwxrwxrwx 1 root root 50 dic 16 10:43 siliconmotion_drv.la -> ../../../xorg/modules/drivers/siliconmotion_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 sis_drv.la -> ../../../xorg/modules/drivers/sis_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 sisusb_drv.la -> ../../../xorg/modules/drivers/sisusb_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 sisusb_drv.so -> ../../../xorg/modules/drivers/sisusb_drv.so
lrwxrwxrwx 1 root root 45 dic 16 10:43 spiceqxl_drv.so -> ../../../xorg/modules/drivers/spiceqxl_drv.so
lrwxrwxrwx 1 root root 41 dic 16 10:43 tdfx_drv.la -> ../../../xorg/modules/drivers/tdfx_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 tga_drv.la -> ../../../xorg/modules/drivers/tga_drv.la
lrwxrwxrwx 1 root root 44 dic 16 10:43 trident_drv.la -> ../../../xorg/modules/drivers/trident_drv.la
lrwxrwxrwx 1 root root 42 dic 16 10:43 tseng_drv.la -> ../../../xorg/modules/drivers/tseng_drv.la
lrwxrwxrwx 1 root root 40 dic 16 10:43 v4l_drv.la -> ../../../xorg/modules/drivers/v4l_drv.la
lrwxrwxrwx 1 root root 46 dic 16 10:43 vboxvideo_drv.la -> ../../../xorg/modules/drivers/vboxvideo_drv.la
lrwxrwxrwx 1 root root 41 dic 16 10:43 vesa_drv.la -> ../../../xorg/modules/drivers/vesa_drv.la
lrwxrwxrwx 1 root root 41 dic 16 10:43 vesa_drv.so -> ../../../xorg/modules/drivers/vesa_drv.so
lrwxrwxrwx 1 root root 43 dic 16 10:43 vmware_drv.la -> ../../../xorg/modules/drivers/vmware_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 vmware_drv.so -> ../../../xorg/modules/drivers/vmware_drv.so
lrwxrwxrwx 1 root root 43 dic 16 10:43 voodoo_drv.la -> ../../../xorg/modules/drivers/voodoo_drv.la
lrwxrwxrwx 1 root root 43 dic 16 10:43 voodoo_drv.so -> ../../../xorg/modules/drivers/voodoo_drv.so
lrwxrwxrwx 1 root root 43 dic 16 10:43 xrdpdev_drv.a -> ../../../xorg/modules/drivers/xrdpdev_drv.a
lrwxrwxrwx 1 root root 44 dic 16 10:43 xrdpdev_drv.so -> ../../../xorg/modules/drivers/xrdpdev_drv.so
lrwxrwxrwx 1 root root 40 dic 16 10:43 ztv_drv.la -> ../../../xorg/modules/drivers/ztv_drv.la
/usr/lib/X11/modules/extensions:
total 292K
-rw-r--r-- 1 root root 292K dic 16 01:28 libglx.so
/usr/lib/X11/modules/input:
total 0
lrwxrwxrwx 1 root root 48 dic 16 11:00 acecad_drv.la -> ../../../../lib/xorg/modules/input/acecad_drv.la
lrwxrwxrwx 1 root root 53 dic 16 11:00 elographics_drv.so -> ../../../../lib/xorg/modules/input/elographics_drv.so
lrwxrwxrwx 1 root root 47 dic 16 11:00 evdev_drv.so -> ../../../../lib/xorg/modules/input/evdev_drv.so
lrwxrwxrwx 1 root root 51 dic 16 11:00 inputtest_drv.so -> ../../../../lib/xorg/modules/input/inputtest_drv.so
lrwxrwxrwx 1 root root 50 dic 16 11:00 joystick_drv.la -> ../../../../lib/xorg/modules/input/joystick_drv.la
lrwxrwxrwx 1 root root 45 dic 16 11:00 kbd_drv.la -> ../../../../lib/xorg/modules/input/kbd_drv.la
lrwxrwxrwx 1 root root 50 dic 16 11:00 libinput_drv.la -> ../../../../lib/xorg/modules/input/libinput_drv.la
lrwxrwxrwx 1 root root 50 dic 16 11:00 libinput_drv.so -> ../../../../lib/xorg/modules/input/libinput_drv.so
lrwxrwxrwx 1 root root 47 dic 16 11:00 mouse_drv.la -> ../../../../lib/xorg/modules/input/mouse_drv.la
lrwxrwxrwx 1 root root 50 dic 16 11:00 penmount_drv.la -> ../../../../lib/xorg/modules/input/penmount_drv.la
lrwxrwxrwx 1 root root 51 dic 16 11:00 synaptics_drv.so -> ../../../../lib/xorg/modules/input/synaptics_drv.so
lrwxrwxrwx 1 root root 49 dic 16 11:00 vmmouse_drv.la -> ../../../../lib/xorg/modules/input/vmmouse_drv.la
lrwxrwxrwx 1 root root 49 dic 16 11:00 vmmouse_drv.so -> ../../../../lib/xorg/modules/input/vmmouse_drv.so
lrwxrwxrwx 1 root root 46 dic 16 11:00 void_drv.la -> ../../../../lib/xorg/modules/input/void_drv.la
lrwxrwxrwx 1 root root 46 dic 16 11:00 void_drv.so -> ../../../../lib/xorg/modules/input/void_drv.so
lrwxrwxrwx 1 root root 47 dic 16 11:00 wacom_drv.la -> ../../../../lib/xorg/modules/input/wacom_drv.la
lrwxrwxrwx 1 root root 47 dic 16 11:00 wacom_drv.so -> ../../../../lib/xorg/modules/input/wacom_drv.so
lrwxrwxrwx 1 root root 49 dic 16 11:00 xrdpkeyb_drv.a -> ../../../../lib/xorg/modules/input/xrdpkeyb_drv.a
lrwxrwxrwx 1 root root 50 dic 16 11:00 xrdpkeyb_drv.so -> ../../../../lib/xorg/modules/input/xrdpkeyb_drv.so
lrwxrwxrwx 1 root root 50 dic 16 11:00 xrdpmouse_drv.a -> ../../../../lib/xorg/modules/input/xrdpmouse_drv.a
lrwxrwxrwx 1 root root 51 dic 16 11:00 xrdpmouse_drv.so -> ../../../../lib/xorg/modules/input/xrdpmouse_drv.so
ls -Rlh /usr/lib/xorg
/usr/lib/xorg:
total 30K
drwxr-xr-x 6 root root 4,0K dic 23 10:53 modules
-rw-r--r-- 1 root root 26K dic 13 00:13 protocol.txt
/usr/lib/xorg/modules:
total 1,1M
drwxr-xr-x 3 root root 1,3K dic 16 09:35 drivers
drwxr-xr-x 2 root root 49 dic 16 00:44 extensions
drwxr-xr-x 2 root root 4,0K dic 23 10:54 input
-rwxrwxrwx 1 root root 923 may 1 2020 libexa.la
-rwxr-xr-x 1 root root 99K dic 13 00:13 libexa.so
-rwxrwxrwx 1 root root 936 may 1 2020 libfbdevhw.la
-rwxr-xr-x 1 root root 23K dic 13 00:13 libfbdevhw.so
-rwxrwxrwx 1 root root 917 may 1 2020 libfb.la
-rwxrwxrwx 1 root root 961 may 1 2020 libglamoregl.la
-rwxr-xr-x 1 root root 225K dic 13 00:13 libglamoregl.so
-rwxrwxrwx 1 root root 935 may 1 2020 libint10.la
-rwxr-xr-x 1 root root 164K dic 13 00:13 libint10.so
-rwxrwxrwx 1 root root 953 may 1 2020 libshadowfb.la
-rwxr-xr-x 1 root root 15K dic 13 00:13 libshadowfb.so
-rwxrwxrwx 1 root root 930 may 1 2020 libshadow.la
-rwxr-xr-x 1 root root 39K dic 13 00:13 libshadow.so
-rwxrwxrwx 1 root root 912 may 1 2020 libvbe.la
-rwxrwxrwx 1 root root 935 may 1 2020 libvgahw.la
-rwxr-xr-x 1 root root 40K dic 13 00:13 libvgahw.so
-rwxrwxrwx 1 root root 923 may 1 2020 libwfb.la
-rwxr-xr-x 1 root root 140K dic 13 00:13 libwfb.so
-rw-r--r-- 1 root root 214K dic 16 09:37 libxorgxrdp.a
-rwxr-xr-x 1 root root 103K dic 16 09:37 libxorgxrdp.so
/usr/lib/xorg/modules/drivers:
total 3,7M
-rwxrwxrwx 1 root root 965 oct 11 2019 amdgpu_drv.la
-rwxr-xr-x 1 root root 158K feb 22 2023 amdgpu_drv.so
-rwxrwxrwx 1 root root 930 feb 10 2019 apm_drv.la
-rwxrwxrwx 1 root root 914 may 11 2018 ark_drv.la
-rwxrwxrwx 1 root root 914 may 11 2018 ast_drv.la
-rwxrwxrwx 1 root root 933 oct 15 2019 ati_drv.la
-rwxr-xr-x 1 root root 15K abr 25 2023 ati_drv.so
-rwxrwxrwx 1 root root 942 feb 16 2019 chips_drv.la
-rwxrwxrwx 1 root root 968 may 11 2018 cirrus_alpine.la
-rwxrwxrwx 1 root root 950 may 11 2018 cirrus_drv.la
-rwxrwxrwx 1 root root 968 may 11 2018 cirrus_laguna.la
-rwxrwxrwx 1 root root 940 may 11 2018 dummy_drv.la
-rwxr-xr-x 1 root root 23K may 11 2023 dummy_drv.so
-rwxrwxrwx 1 root root 925 jun 1 2018 fbdev_drv.la
-rwxr-xr-x 1 root root 27K nov 7 2021 fbdev_drv.so
-rwxrwxrwx 1 root root 941 sep 21 2019 geode_drv.la
-rwxrwxrwx 1 root root 925 may 11 2018 glint_drv.la
-rwxrwxrwx 1 root root 936 dic 12 2018 i128_drv.la
-rwxrwxrwx 1 root root 936 dic 7 2018 i740_drv.la
-rwxrwxrwx 1 root root 986 ene 19 2020 intel_drv.la
-rwxr-xr-x 1 root root 1,7M feb 2 2023 intel_drv.so
-rwxrwxrwx 1 root root 931 may 19 2018 mach64_drv.la
-rwxrwxrwx 1 root root 930 dic 12 2018 mga_drv.la
-rwxrwxrwx 1 root root 981 may 1 2020 modesetting_drv.la
-rwxr-xr-x 1 root root 117K dic 13 00:13 modesetting_drv.so
-rwxrwxrwx 1 root root 960 dic 27 2018 neomagic_drv.la
-rwxrwxrwx 1 root root 964 ene 29 2019 nouveau_drv.la
-rwxr-xr-x 1 root root 221K dic 16 11:28 nouveau_drv.so
-rwxrwxrwx 1 root root 907 may 11 2018 nv_drv.la
-rwxrwxrwx 1 root root 985 may 11 2018 openchrome_drv.la
-rwxr-xr-x 1 root root 244K abr 1 2023 openchrome_drv.so
-rwxr-xr-x 1 root root 165K may 16 2023 qxl_drv.so
-rwxrwxrwx 1 root root 936 oct 23 2018 r128_drv.la
-rwxrwxrwx 1 root root 965 oct 15 2019 radeon_drv.la
-rwxr-xr-x 1 root root 483K abr 25 2023 radeon_drv.so
drwxr-xr-x 3 root root 130 dic 16 09:35 rdp
-rwxrwxrwx 1 root root 949 may 19 2018 rendition_drv.la
-rwxrwxrwx 1 root root 924 jul 26 2019 s3_drv.la
-rwxrwxrwx 1 root root 954 feb 10 2019 s3virge_drv.la
-rwxrwxrwx 1 root root 931 mar 17 2019 savage_drv.la
-rwxrwxrwx 1 root root 973 may 11 2018 siliconmotion_drv.la
-rwxrwxrwx 1 root root 928 dic 3 2019 sis_drv.la
-rwxrwxrwx 1 root root 931 may 11 2018 sisusb_drv.la
-rwxr-xr-x 1 root root 84K nov 7 2021 sisusb_drv.so
-rwxr-xr-x 1 root root 190K may 16 2023 spiceqxl_drv.so
-rwxrwxrwx 1 root root 936 feb 16 2019 tdfx_drv.la
-rwxrwxrwx 1 root root 914 may 11 2018 tga_drv.la
-rwxrwxrwx 1 root root 941 may 11 2018 trident_drv.la
-rwxrwxrwx 1 root root 926 jul 14 2018 tseng_drv.la
-rwxrwxrwx 1 root root 913 ago 17 2018 v4l_drv.la
-rwxrwxrwx 1 root root 964 may 11 2018 vboxvideo_drv.la
-rwxrwxrwx 1 root root 919 may 11 2018 vesa_drv.la
-rwxr-xr-x 1 root root 32K dic 10 2022 vesa_drv.so
-rwxrwxrwx 1 root root 956 sep 30 2019 vmware_drv.la
-rwxr-xr-x 1 root root 175K ene 24 2023 vmware_drv.so
-rwxrwxrwx 1 root root 932 may 11 2018 voodoo_drv.la
-rwxr-xr-x 1 root root 27K dic 10 2022 voodoo_drv.so
-rw-r--r-- 1 root root 23K dic 16 09:37 xrdpdev_drv.a
-rwxr-xr-x 1 root root 23K dic 16 09:37 xrdpdev_drv.so
-rwxrwxrwx 1 root root 929 sep 21 2019 ztv_drv.la
/usr/lib/xorg/modules/drivers/rdp:
total 99K
drwxr-xr-x 5 root root 64 dic 16 09:35 rdp
-rw-r--r-- 1 root root 99K feb 21 2023 xorgxrdp-0.9.19-2-x86_64.pkg.tar.zst
/usr/lib/xorg/modules/drivers/rdp/rdp:
total 0
drwxr-xr-x 3 root root 26 feb 21 2023 etc
drwxr-xr-x 3 root root 26 dic 16 09:35 install
drwxr-xr-x 4 root root 51 feb 21 2023 usr
/usr/lib/xorg/modules/drivers/rdp/rdp/etc:
total 0
drwxr-xr-x 3 root root 27 feb 21 2023 X11
/usr/lib/xorg/modules/drivers/rdp/rdp/etc/X11:
total 0
drwxr-xr-x 2 root root 32 feb 21 2023 xrdp
/usr/lib/xorg/modules/drivers/rdp/rdp/etc/X11/xrdp:
total 2,0K
-rw-r--r-- 1 root root 1,7K feb 21 2023 xorg.conf
/usr/lib/xorg/modules/drivers/rdp/rdp/install:
total 0
drwxr-xr-x 2 root root 48 dic 16 09:35 rdp
/usr/lib/xorg/modules/drivers/rdp/rdp/install/rdp:
total 2,5K
-rwxr-xr-x 1 root root 163 dic 16 09:35 in2.sh
-rw-r--r-- 1 root root 1,7K dic 16 09:35 rbo-rdp.txt
/usr/lib/xorg/modules/drivers/rdp/rdp/usr:
total 0
drwxr-xr-x 3 root root 27 feb 21 2023 lib
drwxr-xr-x 3 root root 31 feb 21 2023 share
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/lib:
total 0
drwxr-xr-x 3 root root 30 feb 21 2023 xorg
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/lib/xorg:
total 0
drwxr-xr-x 4 root root 86 feb 21 2023 modules
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/lib/xorg/modules:
total 317K
drwxr-xr-x 2 root root 58 feb 21 2023 drivers
drwxr-xr-x 2 root root 107 feb 21 2023 input
-rw-r--r-- 1 root root 214K feb 21 2023 libxorgxrdp.a
-rwxr-xr-x 1 root root 103K feb 21 2023 libxorgxrdp.so
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/lib/xorg/modules/drivers:
total 46K
-rw-r--r-- 1 root root 23K feb 21 2023 xrdpdev_drv.a
-rwxr-xr-x 1 root root 23K feb 21 2023 xrdpdev_drv.so
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/lib/xorg/modules/input:
total 54K
-rw-r--r-- 1 root root 17K feb 21 2023 xrdpkeyb_drv.a
-rwxr-xr-x 1 root root 14K feb 21 2023 xrdpkeyb_drv.so
-rw-r--r-- 1 root root 8,5K feb 21 2023 xrdpmouse_drv.a
-rwxr-xr-x 1 root root 14K feb 21 2023 xrdpmouse_drv.so
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/share:
total 0
drwxr-xr-x 3 root root 31 feb 21 2023 licenses
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/share/licenses:
total 0
drwxr-xr-x 2 root root 30 feb 21 2023 xorgxrdp
/usr/lib/xorg/modules/drivers/rdp/rdp/usr/share/licenses/xorgxrdp:
total 1,0K
-rw-r--r-- 1 root root 967 feb 21 2023 COPYING
/usr/lib/xorg/modules/extensions:
total 309K
-rwxrwxrwx 1 root root 933 may 1 2020 libglx.la
-rwxr-xr-x 1 root root 308K dic 13 00:13 libglx.so
/usr/lib/xorg/modules/input:
total 498K
-rwxrwxrwx 1 root root 949 may 11 2018 acecad_drv.la
-rwxr-xr-x 1 root root 27K dic 9 2022 elographics_drv.so
-rwxr-xr-x 1 root root 67K nov 7 2021 evdev_drv.so
-rwxr-xr-x 1 root root 27K dic 13 00:13 inputtest_drv.so
-rwxrwxrwx 1 root root 941 may 11 2018 joystick_drv.la
-rwxrwxrwx 1 root root 911 may 11 2018 kbd_drv.la
-rwxrwxrwx 1 root root 953 may 19 2020 libinput_drv.la
-rwxr-xr-x 1 root root 87K ago 25 02:57 libinput_drv.so
-rwxrwxrwx 1 root root 923 jun 19 2018 mouse_drv.la
-rwxrwxrwx 1 root root 940 may 11 2018 penmount_drv.la
-rwxr-xr-x 1 root root 75K jul 11 2022 synaptics_drv.so
-rwxrwxrwx 1 root root 936 may 11 2018 vmmouse_drv.la
-rwxr-xr-x 1 root root 23K oct 8 2022 vmmouse_drv.so
-rwxrwxrwx 1 root root 918 dic 13 2018 void_drv.la
-rwxr-xr-x 1 root root 14K nov 7 2022 void_drv.so
-rwxrwxrwx 1 root root 936 dic 24 2019 wacom_drv.la
-rwxr-xr-x 1 root root 116K abr 6 2023 wacom_drv.so
-rw-r--r-- 1 root root 17K dic 16 09:37 xrdpkeyb_drv.a
-rwxr-xr-x 1 root root 14K dic 16 09:37 xrdpkeyb_drv.so
-rw-r--r-- 1 root root 8,5K dic 16 09:37 xrdpmouse_drv.a
-rwxr-xr-x 1 root root 14K dic 16 09:37 xrdpmouse_drv.so
Part of /usr/lib/xorg/protocol.txt
// ...
R000 DRI2:QueryVersion
R001 DRI2:Connect
R002 DRI2:Authenticate
R003 DRI2:CreateDrawable
R004 DRI2:DestroyDrawable
R005 DRI2:GetBuffers
R006 DRI2:CopyRegion
R007 DRI2:GetBuffersWithFormat
R008 DRI2:SwapBuffers
R009 DRI2:GetMSC
R010 DRI2:WaitMSC
R011 DRI2:WaitSBC
R012 DRI2:SwapInterval
V000 DRI2:BufferSwapComplete
V001 DRI2:InvalidateBuffers
R000 DRI3:QueryVersion
R001 DRI3:Open
R002 DRI3:PixmapFromBuffer
R003 DRI3:BufferFromPixmap
R004 DRI3:FenceFromFD
R005 DRI3:FDFromFence
// ...
R001 GLX:
R002 GLX:Large
R003 GLX:CreateContext
R004 GLX:DestroyContext
R005 GLX:MakeCurrent
R006 GLX:IsDirect
R007 GLX:QueryVersion
R008 GLX:WaitGL
R009 GLX:WaitX
R010 GLX:CopyContext
R011 GLX:SwapBuffers
R012 GLX:UseXFont
R013 GLX:CreateGLXPixmap
R014 GLX:GetVisualConfigs
R015 GLX:DestroyGLXPixmap
R016 GLX:VendorPrivate
R017 GLX:VendorPrivateWithReply
R018 GLX:QueryExtensionsString
R019 GLX:QueryServerString
R020 GLX:ClientInfo
R021 GLX:GetFBConfigs
R022 GLX:CreatePixmap
R023 GLX:DestroyPixmap
R024 GLX:CreateNewContext
R025 GLX:QueryContext
R026 GLX:MakeContextCurrent
R027 GLX:CreatePbuffer
R028 GLX:DestroyPbuffer
R029 GLX:GetDrawableAttributes
R030 GLX:ChangeDrawableAttributes
R031 GLX:CreateWindow
R032 GLX:DeleteWindow
R033 GLX:SetClientInfoARB
R034 GLX:CreateContextAttribsARB
R035 GLX:SetClientInfo2ARB
R101 GLX:NewList
R102 GLX:EndList
R103 GLX:DeleteLists
R104 GLX:GenLists
R105 GLX:FeedbackBuffer
R106 GLX:SelectBuffer
R107 GLX:Mode
R108 GLX:Finish
R109 GLX:PixelStoref
R110 GLX:PixelStorei
R111 GLX:ReadPixels
R112 GLX:GetBooleanv
R113 GLX:GetClipPlane
R114 GLX:GetDoublev
R115 GLX:GetError
R116 GLX:GetFloatv
R117 GLX:GetIntegerv
R118 GLX:GetLightfv
R119 GLX:GetLightiv
R120 GLX:GetMapdv
R121 GLX:GetMapfv
R122 GLX:GetMapiv
R123 GLX:GetMaterialfv
R124 GLX:GetMaterialiv
R125 GLX:GetPixelfv
R126 GLX:GetPixelMapuiv
R127 GLX:GetPixelMapusv
R128 GLX:GetPolygonStipple
R129 GLX:GetString
R130 GLX:GetTexEnvfv
R131 GLX:GetTexEnviv
R132 GLX:GetTexGendv
R133 GLX:GetTexGenfv
R134 GLX:GetTexGeniv
R135 GLX:GetTexImage
R136 GLX:GetTexParameterfv
R137 GLX:GetTexParameteriv
R138 GLX:GetTexLevelParameterfv
R139 GLX:GetTexLevelParameteriv
R140 GLX:IsEnabled
R141 GLX:IsList
R142 GLX:Flush
R143 GLX:AreTexturesResident
R144 GLX:DeleteTextures
R145 GLX:GenTextures
R146 GLX:IsTexture
R147 GLX:GetColorTable
R148 GLX:GetColorTableParameterfv
R149 GLX:GetColorTableParameterfv
R150 GLX:GetConvolutionFilter
R151 GLX:GetConvolutionParameterfv
R152 GLX:GetConvolutionParameteriv
R153 GLX:GetSeparableFilter
R154 GLX:GetHistogram
R155 GLX:GetHistogramParameterfv
R156 GLX:GetHistogramParameteriv
R157 GLX:GetMinmax
R158 GLX:GetMinmaxParameterfv
R159 GLX:GetMinmaxParameteriv
R160 GLX:GetCompressedTexImage
V000 GLX:PbufferClobber
V001 GLX:BufferSwapComplete
E000 GLX:BadContext
E001 GLX:BadContextState
E002 GLX:BadDrawable
E003 GLX:BadPixmap
E004 GLX:BadContextTag
E005 GLX:BadCurrentWindow
E006 GLX:BadRenderRequest
E007 GLX:BadLargeRequest
E008 GLX:UnsupportedPrivateRequest
E009 GLX:BadFBConfig
E010 GLX:BadPbuffer
E011 GLX:BadCurrentDrawable
E012 GLX:BadWindow
// ...
|
To solve this, you have to look at the errors in the Xorg log:
grep "(EE)" /var/log/Xorg.0.log
In this case, what was necessary was to install these three packages:
llvm 11: https://distrib-coffee.ipsl.jussieu.fr/pub/linux/altlinux/p10/branch/x86_64/RPMS.classic/llvm11.0-libs-11.0.1-alt6.x86_64.rpm
xorg-dri-nouveau: https://distrib-coffee.ipsl.jussieu.fr/pub/linux/altlinux/Sisyphus/x86_64/RPMS.classic/xorg-dri-nouveau-23.3.1-alt1.x86_64.rpm
libffi7: https://distrib-coffee.ipsl.jussieu.fr/pub/linux/altlinux/Sisyphus/x86_64/RPMS.classic/libffi7-3.3-alt2.x86_64.rpm
Then make symbolic links:
cd /usr/lib/X11/modules/dri
ln -sv ../../../xorg/modules/drivers/* .
Also, before installing the packages, you must extract them and move the lib64 folders to lib.
| GL opens in TTY terminal, but not if X is present |
1,373,311,224,000 |
I recently bought a new Lenovo Ideapad Slim 3 laptop and I am having problems getting the amdgpu driver working properly in Arch Linux. The GPU seems to work fine straight away with a Mint live USB (glxgears plays, etc.); however, in the Arch system I am trying to install to the SSD, I get this error with glxinfo -B:
$ glxinfo -B
name of display: :0
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 151 (GLX)
Minor opcode of failed request: 24 (X_GLXCreateNewContext)
Value in failed request: 0x0
Serial number of failed request: 37
Current serial number in output stream: 38
I can't see any errors in the dmesg log relating to [drm] and amdgpu - the messages in Arch seem very similar to those in Mint. However, in Arch I see the following error in my Xorg.0.log file:
[ 42.568] (II) Loading sub module "glamoregl"
[ 42.568] (II) LoadModule: "glamoregl"
[ 42.568] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
[ 42.572] (II) Module glamoregl: vendor="X.Org Foundation"
[ 42.572] compiled for 1.21.1.8, module version = 1.0.1
[ 42.572] ABI class: X.Org ANSI C Emulation, version 0.4
[ 42.577] (EE) AMDGPU(0): eglGetDisplay() failed
[ 42.577] (EE) AMDGPU(0): glamor detected, failed to initialize EGL.
[ 42.577] (WW) AMDGPU(0): amdgpu_glamor_pre_init returned FALSE, using ShadowFB
The Xorg.0.log file for the Mint live USB doesn't show this error:
[ 17.992] (II) Loading sub module "glamoregl"
[ 17.992] (II) LoadModule: "glamoregl"
[ 17.992] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
[ 17.995] (II) Module glamoregl: vendor="X.Org Foundation"
[ 17.995] compiled for 1.21.1.3, module version = 1.0.1
[ 17.995] ABI class: X.Org ANSI C Emulation, version 0.4
[ 18.028] (II) AMDGPU(0): glamor X acceleration enabled on AMD RENOIR (LLVM 13.0.1, DRM 3.42, 5.15.0-56-generic)
[ 18.028] (II) AMDGPU(0): glamor detected, initialising EGL layer.
It seems likely this error is related to what is causing the problem. Does anyone know what might be causing this issue between amdgpu and glamor in Arch?
It's a brand new laptop, with an AMD Ryzen 5 7530U CPU, with integrated Radeon graphics.
|
I have managed to resolve the issue myself. The Arch system was copied over from a previous machine that had an NVidia GPU, and there were some graphics packages still installed relating to NVidia. I removed those and now the 3D acceleration seems to be working properly (both glxinfo and glxgears work). The old NVidia packages were:
nvidia-340xx-dkms
nvidia-340xx-utils
opencl-nvidia-340xx
xf86-video-nouveau
ffnvcodec-headers
Hopefully this info might help, if someone else has a similar problem in future.
| Problem with AMD integrated gpu on new Lenovo laptop |
1,656,029,979,000 |
The Debian package libreoffice-core (which is described in the Debian repositories as containing " the architecture-dependent core files of LibreOffice," and which is itself a dependency for libreoffice-writer and similar packages) has an absolute dependency (i.e., the relationship of the packages is depends, not recommends or suggests) on libldap-2.4-2 (described as "the run-time libraries for the OpenLDAP (Lightweight Directory Access Protocol) servers and clients").
Why? How is a word processor whose most common use case by far is editing files stored locally, on the same machine it is running on, so dependent on a protocol for accessing remote directories that it cannot even be configured if the latter is not present? Is this just a dependency classification error (i.e., the relationship should actually be recommends or suggests), or does libreoffice actually somehow need OpenLDAP installed in order to function?
|
libreoffice-core ships /usr/lib/libreoffice/program/soffice.bin, and that is linked against
libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 (0x00007f55a8c9e000)
The package build tools therefore automatically add a dependency on the package providing that library, libldap-2.4-2. It’s a strong dependency because without it, LibreOffice as built in Debian simply wouldn’t start.
Of course LibreOffice could be changed to support dynamically loading LDAP support as needed, but that’s a rather invasive change to make in a package. Another option would be to build it without LDAP support, but some people do actually need it, e.g. to access shared address books, which Writer can use for mail-merges among other things.
Presumably the package maintainer chose to provide LDAP-based features for everyone, instead of introducing complexity in order to allow users to choose. The LDAP library adds less than a megabyte of dependencies, which is a very small amount compared to LibreOffice as a whole.
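You can see this linkage yourself: ldd lists the shared libraries a binary needs, which is the same information the packaging tools use to compute dependencies. A sketch (it defaults to /bin/ls so it runs anywhere; substitute soffice.bin's path on a machine with LibreOffice installed):

```shell
# Show the shared libraries a binary is linked against. The Debian
# build tools derive library package dependencies from this info.
binary=${1:-/bin/ls}   # e.g. /usr/lib/libreoffice/program/soffice.bin
ldd "$binary" | grep '\.so'
```

On Debian, `dpkg -S /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2` then maps a listed library file back to the package that ships it.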
| Why does LibreOffice (at least as packaged for Debian) depend on libldap? |
1,656,029,979,000 |
I'm using openldap-server-2.4.38_1 on FreeBSD 9.1-RELEASE-p5.
1) Can I get a list of active (loaded) schemas without viewing the slapd.conf file?
2) How can I get a description of the objectClasses and/or their attributes in these schemas without viewing the schema files?
So - are there any pre-built utils (I mean utils like ldapsearch etc.), or some external scripts like ldapscripts? Or maybe I can obtain this info directly from ldapshell or phpldapadmin (but I don't like using utils with a web interface...)?
|
Yes to both.
ldapsearch -H ldap://ldap.mydomain.com -x -s base -b "" +
# the + returns operational attributes
will give a list of supported features. You may want to look up the meaning of the OIDs that get returned here.
More interesting stuff is in the cn=Subschema section:
ldapsearch -H ldap://ldap.mydomain.com -x -s base -b "cn=subschema" objectclasses
which will list all supported ObjectClasses.
Note that unlike other LDAP servers, you can't use LDAP commands to extend the schema in the live server; you must edit the files and restart your OpenLDAP server to modify the schema.
| How can I list active schemes, classes etc? |
1,656,029,979,000 |
I am trying to setup backup and restore and make sure it works.
Please note that the database size of ldap.old is a lot more than that of ldap. The /var/lib/ldap.old directory is my existing database. I have renamed /var/lib/ldap for backup/restore testing.
I am getting the following error when restoring. Because of this I am not sure I have successfully restored everything.
...
added: "uid=user11123,ou=Abcd,ou=Industry Professional,dc=testdomain,dc=org" (0001cc9f)
added: "uid=user13123,ou=Abcd,ou=Industry Professional,dc=testdomain,dc=org" (0001cca0)
Error, entries missing!
entry 79870: ou=industryprofessional,dc=testdomain,dc=org
entry 79871: ou=abcd professional,ou=industryprofessional,dc=testdomain,dc=org
Disk usage:
[root@openldap]# du -khs ldap ldap.old/
3.3G ldap
4.0G ldap.old/
Here is my backup / restore process:
Backup:
slapcat -v -l backup_openldap.ldif
Restore:
/etc/init.d/ldap stop
mv /var/lib/ldap /var/lib/ldap.old
mkdir /var/lib/ldap
chmod go-rwx /var/lib/ldap
cp -rfp /var/lib/ldap.old/DB_CONFIG /var/lib/ldap
slapadd -v -l backup_openldap.ldif
chown ldap:ldap /var/lib/ldap
/etc/init.d/ldap start
How do I validate that I have restore all records successfully?
|
First of all you should be aware of slapcat's limitations:
For some backend types, your slapd(8) should not be running
(at least, not in read-write mode) when you do this to ensure
consistency of the database. It is always safe to run slapcat
with the slapd-bdb(5), slapd-hdb(5), and slapd-null(5) backends.
So you better pack that backup in /etc/init.d/ldap stop and /etc/init.d/ldap start as well.
Before restarting ldap in the restore procedure, you can dump the just loaded data to a temporary file and compare that to the LDIF file you just used as input. I am pretty sure the LDIF output for slapcat is sorted by Distinguished Names so a diff should exit with exit-code 0.
...
chown ldap:ldap *
slapcat -l /var/tmp/test.ldif
diff /var/tmp/test.ldif /backup/openldap/backup_ldap2.diff
if [ $? != 0 ] ; then
echo 'differences found'
fi
/etc/init.d/ldap start
This of course assumes that slapcat is working correctly. If you do not trust that you should extract all data relevant to you, from the running DB with ldap_search_ext(), generate some output (dump or checksum) from that, and compare that with running the same code on the restored database (after starting ldap of course). That way you would notice if some data relevant to your usage is left out of the dump by slapcat (unlikely, but possible if it has a bug)
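The diff-based verification step can be wrapped in a small shell function; a sketch (file paths are placeholders), assuming the slapcat output is deterministic as described above:

```shell
# verify_restore: compare a fresh slapcat dump against the backup LDIF.
# Returns 0 when the restored database matches the backup exactly.
verify_restore() {
    dump="$1"     # e.g. produced by: slapcat -l "$dump" after slapadd
    backup="$2"   # the LDIF that was fed to slapadd
    if diff -q "$dump" "$backup" >/dev/null; then
        echo "restore verified: dump matches backup"
        return 0
    else
        echo "differences found between $dump and $backup" >&2
        return 1
    fi
}
```

For example, `verify_restore /var/tmp/test.ldif /backup/backup_openldap.ldif` before running `/etc/init.d/ldap start`.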
| Openldap backup restore |
1,656,029,979,000 |
I have root access to a RHEL6 system and I want to use the corporate ldap server where I work for user authentication.
I ran authconfig-tui and checked [*] Use LDAP, left [*] Use Shadow Passwords checked, then checked [*] Use LDAP Authentication, clicked the Next button, left [ ] Use TLS unchecked, set Server: ldap://ldap.mycompanysdomainname.com, and set the Base DN to what the LDAP admin told me to use.
But I can only log in with the password I set locally on the box for my user account and cannot log on if I use that password that is stored on the ldap server.
Isn't there something I need to do in the /etc/passwd or /etc/shadow files ... something like changing the passwd field to [email protected] or something like that?
|
Take a look at these documents from Red Hat. They show how you can change your system so that it will authenticate to an LDAP server rather than use the local credentials on the system. The topic is a little much to include on this site so I'm only providing a reference here to the official docs.
Chapter 11. Configuring Authentication from the Deployment Guide
Link to the toplevel of the Deployment Guide for RHEL 6
General steps
(excerpt from here)
install client packages
$ sudo yum install openldap openldap-clients nss_ldap
configure client's LDAP setup
On the client machines the following files need to be edited: /etc/ldap.conf and /etc/openldap/ldap.conf. They need to contain the proper server and search base information for the organization.
To do this, run the graphical Authentication Configuration Tool (system-config-authentication) and select Enable LDAP Support under the User Information tab. It's also possible to edit these files by hand.
nssswitch
On the client machines, the /etc/nsswitch.conf must be edited to use LDAP.
To do this, run the Authentication Configuration Tool (system-config-authentication) and select Enable LDAP Support under the User Information tab.
If editing /etc/nsswitch.conf by hand, add ldap to the appropriate lines.
For example:
passwd: files ldap
shadow: files ldap
group: files ldap
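Once nsswitch.conf lists ldap, you can verify that name-service lookups actually reach the directory with getent (the second username is a placeholder for one of your LDAP-only accounts):

```shell
# root resolves through the 'files' source; an LDAP-only user should
# appear the same way once the 'ldap' source is configured and reachable.
getent passwd root
getent passwd someldapuser || echo "someldapuser not found - LDAP lookups not working yet"
```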
| how do I configure my RHEL5 or RHEL6 system to use ldap for authentication? |
1,656,029,979,000 |
I am trying to set up a Samba server to use an LDAP server for authentication only, but pull all account information (user ID etc.) from SSSD, PAM etc. Basically, the server should act as a standalone server except that the user names and passwords will be checked against LDAP.
The LDAP server only provides the posixAccount and InetOrgPerson object classes, and it is not under my control; I cannot change it.
There is a lot of information out on the Internet about making Samba work with LDAP, but invariably it involves modifying the LDAP schema. That is not an option in our situation. Nor should it be necessary, since I only want to authenticate the users against the user name and password.
Incidentally, I know that Samba does not allow authenticating with an anonymous bind (because the passwords are transmitted hashed). As far as I can tell, that's a separate issue addressed by providing an LDAP admin DN.
The system is already set up to authenticate local logins, SSH etc. via SSSD to the same LDAP server. This works and is well-tested.
Software: RedHat 7.3, Samba 4.4.4, LDAP server is Oracle LDAP (probably based on OpenLDAP, but it is outside my control).
I am looking for instructions on how to accomplish this.
|
The answer appears that it cannot be done, at least not without turning password hashing off and severely compromising security.
The problem is basically the same that first led to the creation of the smbpasswd file and utility: the Samba server never sees the plaintext password. The password is always hashed. The hashing algorithm has changed over the years. The password stored in the ordinary LDAP schema is also hashed, but using a different algorithm.
The only way to solve this is by either modifying the LDAP schema (as many sites suggest), using smbpasswd, or using Kerberos for authentication.
| Samba - use LDAP for authentication only? |
1,656,029,979,000 |
I have OpenLDAP v2.4 running on CentOS 7 and working, but I cannot get it to log anything.
I have tried adding the below line to the rsyslog.conf file but i still do not get any log file.
LOCAL4.* /var/log/openldap/slapd.log
When I added this line, I ran the command below to reload the rsyslog config, and also stopped and started OpenLDAP.
pkill -HUP rsyslog
I can't find any more instructions on how to enable logging.
|
To enable OpenLDAP debugs, you would want to add the following to your slapd.conf
loglevel <level> (eg: stats)
If you do not use slapd.conf, you may then pass that option to the slapd service. In debian/ubuntu, you would find some /etc/default/slapd file, you may update its SLAPD_OPTIONS:
$ grep SLAPD_OPTIONS /etc/default/slapd
SLAPD_OPTIONS="-s 256"
We may then restart slapd:
systemctl restart slapd
Valid slapd log levels would include:
| -1 | Enable all debugging |
| 0 | Enable no debugging |
| 1 | Trace function calls |
| 2 | Debug packet handling |
| 4 | Heavy trace debugging |
| 8 | Connection management |
| 16 | Log packets sent and received |
| 32 | Search filter processing |
| 64 | Configuration file processing |
| 128 | Access control list processing |
| 256 | Stats log connections, operations and results |
| 512 | Stats log entries sent |
| 1024 | Log communication with shell backends |
| 2048 | Log entry parsing debugging |
For further details, see http://www.openldap.org/doc/admin24/slapdconfig.html
Besides, as Jeff pointed it out, your syslog configuration looks wrong to begin with.
LOCAL4.* /var/log/openldap/
Should probably be:
LOCAL4.* /var/log/openldap/some-file.log
Or:
LOCAL4.* /var/log/openldap.log
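On CentOS 7, slapd is usually configured through the dynamic cn=config backend rather than slapd.conf; in that case the log level can be changed at runtime by applying an LDIF like this sketch with `ldapmodify -Y EXTERNAL -H ldapi:///`:

```ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
```

olcLogLevel accepts the same values as the loglevel directive above, and the syslog routing via LOCAL4 still applies.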
| OpenLDAP v2.4 enable logging |
1,656,029,979,000 |
Perhaps my Google kung-fu is not doing great today, but I found ways to apparently do this for each user (one by one) on the client side, or even a way to do it from the LDAP side with ldapmodify, again one by one.
What I am trying to set up is SSH-based LDAP login, and it's working, sorta. I still need to restrict it to groups, but that's another topic. As of now, the default shell is /bin/sh. I don't see an option in phpLDAPadmin for /bin/bash.
The /bin/sh shell is really crippling since there is no tab completion; that is the first thing I noticed, and I'm sure there is more.
How can I change the default shell for all posixAccounts to be /bin/bash from the LDAP server, so that it doesn't need to be configured on a per-user or per-client basis? If that's not possible, how can I globally change it on the client side? I/we only manage say 50 servers, so it's not impossible to add that step to LDAP integration, but for tidiness and properness I feel it should be done from the LDAP server.
|
sudo nano /etc/phpldapadmin/templates/creation/posixAccount.xml
The Home Directory isn't really part of the question :P, but it's one of only two things I modified, so I figured I'd include it! Works great now though! For users who had already logged into a server it didn't retroactively fix things: even after I changed the values in phpLDAPadmin, they still got the old home dir and /bin/sh.
<attribute id="homeDirectory">
<display>Home directory</display>
<!-- <onchange>=autoFill(homeDirectory;/home/%gidNumber|0-0/T%/%uid|3-%)</onchange> -->
<order>8</order>
<page>1</page>
</attribute>
<attribute id="loginShell">
<display>Login shell</display>
<order>9</order>
<page>1</page>
<!-- <value><![CDATA[=php.PickList(/;(&(objectClass=posixAccount));loginShell;%loginShell%;;;;loginShell)]]></value> -->
<type>select</type>
<value id="/bin/sh">/bin/sh</value>
<value id="/bin/csh">/bin/csh</value>
<value id="/bin/tsh">/bin/tsh</value>
<value id="/bin/bash">/bin/bash</value>
</attribute>
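For accounts that already exist (and therefore keep the old values), one approach is to generate a modify LDIF per uid and pipe it to ldapmodify. A sketch of a hypothetical helper; the base DN is an assumption you must adjust for your tree:

```shell
# make_shell_ldif (hypothetical helper): emit an LDIF that replaces
# loginShell with /bin/bash for every uid given as an argument.
make_shell_ldif() {
    base="ou=people,dc=example,dc=com"   # assumption: adjust to your tree
    for uid in "$@"; do
        printf 'dn: uid=%s,%s\n' "$uid" "$base"
        printf 'changetype: modify\n'
        printf 'replace: loginShell\n'
        printf 'loginShell: /bin/bash\n\n'
    done
}
```

Usage would be along the lines of `make_shell_ldif alice bob | ldapmodify -D "cn=admin,dc=example,dc=com" -W`.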
| Change default login shell to /bin/bash for ALL ldap users from LDAP server - not client |
1,656,029,979,000 |
I see that liblber is a separate package, BUT searching for what it is and why it is needed never turns up an answer - only lots of questions/comments/etc. on OpenLDAP.
So, my two questions:
Is liblber contained within OpenLDAP? If yes, that may explain why it does not seem to have its own project page.
What is liblber for - what does it do? It seems to be presented separately from OpenLDAP, even though one never seems to find one without the other. (More simply: what does it do? Why would I want it? And please do not just say "it supports OpenLDAP" - how does it support it, if it supports it?)
Thanks!
|
(1) Yes, it's part of OpenLDAP.
(2) It implements the LDAP version of ASN.1 Basic Encoding Rules. OpenLDAP ships with documentation for this library; possibly you need to install a -dev package to get it. For example, the lber-encode(3) manpage. Unless you are a C/C++ programmer who needs to deal with LDAP, this library will probably be of little interest to you.
| Is liblber part of openLDAP? What is it's purpose? |
1,656,029,979,000 |
I am currently considering to shuffle some infrastructure around, but my question boils down to:
Can I sync a list of users and passwords to Azure AD (only for Office 365) from a linux samba server?
Currently there's an on premise Windows Server that doesn't do much apart from DNS and user management for different services through the Active Directory. My thought was: why not ditch that for a Linux Server with one of the open source replacements.
Currently I'm stuck on the office 365 user sync and posts that state that Open LDAP and other possibilities will be available in the near future. Last update: 2014.
|
I don't believe there is a tool "right now" that will allow you to synchronise accounts from a Samba DC to Azure Active Directory. You should be able to set up your spare Windows Server as a secondary Domain Controller and then synchronise from that using Azure AD Connect, though.
Another option - albeit a heavyweight one - might be to go the Federation route and use your own SAML authentication server(s) to authenticate access to Office 365. This is not for the faint-hearted; I've not yet managed to get Microsoft's Federation-based authentication to work with exclusively Microsoft software in the loop, so I suspect you would be embarking on a voyage of discovery without any knowledge that the edge of the world really wasn't a gigantic waterfall...
In practical terms you might get a more hands-on answer over on Server Fault.
| Linux and Azure AD sync possible? |
1,656,029,979,000 |
I am able to log in to an Active Directory using the userPrincipalName attribute of a user objectClass; (e.g. [email protected])
I have also set up an OpenLDAP server instance to which I can only authenticate using the dn, e.g.
"cn=somecn,cn=anothercn,ou=someou,dc=mydomain,dc=com"
How is it possible to authenticate to OpenLDAP using another field, e.g. the mail attribute of the inetOrgPerson for example?
What is more, even if such a thing were possible, how would anyone ensure the uniqueness of the field? (a functionality I assume is offered by AD for the userPrincipalName field)
|
OpenLDAP supports two authentication methods (simple and SASL), while SASL is the default method for ldap-utils like ldapsearch.
When you are authenticating using the DN, you do a so called "simple bind".
simple bind
The simple method has three modes of operation:
anonymous
unauthenticated
user/password authenticated
For example:
# ldapwhoami -x
anonymous
or:
# ldapwhoami -x -D uid=rda,ou=people,dc=phys,dc=ethz,dc=ch -w secret1234
dn:uid=rda,ou=people,dc=phys,dc=ethz,dc=ch
SASL
OpenLDAP clients and servers are capable of authenticating via the Simple Authentication and Security Layer (SASL) framework, which is detailed in RFC4422. SASL supports several authentication mechanisms. The most common mechanisms with OpenLDAP are EXTERNAL and GSSAPI.
The EXTERNAL mechanism makes use of an authentication performed by a lower-level protocol: usually TLS or Unix IPC. For example using Unix IPC as user root:
# ldapwhoami -Y EXTERNAL -H ldapi://
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn:gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
The authenticated user is mapped to a DN in the tree cn=peercred,cn=external,cn=auth.
The GSSAPI mechanism usually means Kerberos 5. If you have a Kerberos 5 infrastructure deployed, you can use Kerberos principals for authentication.
First authenticate against the KDC and get a TGT:
# kinit rda
Password for [email protected]: secret1234
Then you can use GSSAPI for authentication against OpenLDAP:
# ldapwhoami
SASL/GSSAPI authentication started
SASL username: [email protected]
SASL SSF: 56
SASL data security layer installed.
dn:uid=rda,cn=gssapi,cn=auth
The principal [email protected] is mapped to a DN in the tree cn=gssapi,cn=auth.
Now you can map that authenticated DN to an actual DN in the database using a regular expression with the olcAuthzRegexp configuration in cn=config:
dn: cn=config
objectClass: olcGlobal
cn: config
olcAuthzRegexp: {0}uid=([^,/]*),cn=phys.ethz.ch,cn=gssapi,cn=auth uid=$1,ou=people,dc=phys,dc=ethz,dc=ch
...
This olcAuthzRegexp line maps any user principal in the realm PHYS.ETHZ.CH to a corresponding posixAccount entry under ou=people,dc=phys,dc=ethz,dc=ch which has the same username in the uid attribute.
For example with the following posix entry
# ldapsearch uid=rda
dn: uid=rda,ou=people,dc=phys,dc=ethz,dc=ch
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: krbPrincipalAux
objectClass: krbTicketPolicyAux
uid: rda
krbPrincipalName: [email protected]
...
ldapwhoami will show:
# ldapwhoami
SASL/GSSAPI authentication started
SASL username: [email protected]
SASL SSF: 56
SASL data security layer installed.
dn:uid=rda,ou=people,dc=phys,dc=ethz,dc=ch
The mapping using olcAuthzRegexp must match a unique entry in the DIT. This is to be ensured by the administrator or the managing software.
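If the server uses the dynamic cn=config backend, such a mapping can be added at runtime with `ldapmodify -Y EXTERNAL -H ldapi:///` and an LDIF like this sketch (realm and suffix are taken from the examples above; adjust for your setup):

```ldif
dn: cn=config
changetype: modify
add: olcAuthzRegexp
olcAuthzRegexp: {0}uid=([^,/]*),cn=phys.ethz.ch,cn=gssapi,cn=auth uid=$1,ou=people,dc=phys,dc=ethz,dc=ch
```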
| OpenLDAP vs Active Directory authentication mechanisms |
1,656,029,979,000 |
I'm trying to use the 'userCertificate' attribute to hold a 'der' file.
I can happily add my certificate using the ldif:
dn: cn=bob,ou=users,dc=home
changetype: modify
add: userCertificate;binary
userCertificate;binary:< file:///home/bob/cert.der
I see my certificate in base64 encoding when I do an ldapsearch and life seems good. But when I try to use ldapcompare:
ldapcompare -D"cn=admin,dc=home" -W "cn=bob,ou=users,dc=home" "userCertificate;binary:< file:///home/bob/cert.der"
I get the error:
Compare Result: Invalid syntax (21)
Additional info: unable to normalize value for matching
UNDEFINED
I get the same error if I try to compare using the base64 encoding
ldapcompare -D"cn=admin,dc=home" -W "cn=bob,ou=users,dc=home" "userCertificate:: base64encodedStringOfStuff"
Any ideas?
|
I just get this error: ldap_modify: Undefined attribute type (17) additional info: usercertificate: requires ;binary transfer.
This error message pretty clearly refers to what's mandated in RFC 4523, section 2.1. You simply always have to append ;binary to the attribute name in all LDAP operations affecting attribute userCertificate.
ldap_msgfree ldap_err2string Compare Result: Invalid syntax (21) Additional info: unable to normalize value for matching UNDEFINED
When using compare operation you have to look at which EQUALITY matching rule is available for the assertion attribute.
In subschema userCertificate is declared with EQUALITY certificateExactMatch based on issuer name and serial (see RFC 4523 section 2.5) which means there's no pure octet string match available for that attribute.
So you need to extract the decimal serial number and the issuer DN (LDAP string representation) from the certificate:
$ openssl x509 -noout -nameopt rfc2253 -serial -issuer -inform der -in ~/certs/[email protected]
serial=0F560E
issuer=CN=StartCom Class 1 Primary Intermediate Client CA,OU=Secure Digital Certificate Signing,O=StartCom Ltd.,C=IL
Convert hex serial to decimal which is 1005070 in this example and invoke ldapcompare like this:
ldapcompare "cn=Michael Strö[email protected],dc=stroeder,dc=de" 'userCertificate;binary:{ serialNumber 1005070, issuer "cn=StartCom Class 1 Primary Intermediate Client CA,ou=Secure Digital Certificate Signing,o=StartCom Ltd.,c=IL"}'
TRUE
Additional notes:
Be aware that DNs are complex beasts with escaping of special characters which need special treatment on the shell command line. Therefore I'd use a scripting language for this task to avoid some of the trouble.
In contrast to modify operations and attribute retrieval, you don't need the ;binary transfer type for the compare operation. With OpenLDAP it won't hurt either; not sure about other LDAP server implementations.
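The serial/issuer extraction can also be scripted. A sketch of two hypothetical helpers that turn the openssl output from above into the assertion value ldapcompare expects (the hex-to-decimal conversion relies on printf '%d' understanding a 0x prefix):

```shell
# to_decimal (hypothetical helper): convert the hex serial printed by
# 'openssl x509 -serial' into the decimal form certificateExactMatch uses.
to_decimal() {
    printf '%d\n' "0x$1"
}

# build_assertion (hypothetical helper): combine the decimal serial and
# issuer DN into the assertion value for ldapcompare.
build_assertion() {
    serial_hex="$1"; issuer="$2"
    printf '{ serialNumber %s, issuer "%s"}\n' "$(to_decimal "$serial_hex")" "$issuer"
}
```

For instance, `build_assertion 0F560E 'CN=StartCom Class 1 Primary Intermediate Client CA,OU=Secure Digital Certificate Signing,O=StartCom Ltd.,C=IL'` reproduces the assertion used in the ldapcompare call above.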
| Problems with ldap userCertificate attribute |
1,656,029,979,000 |
I am trying to set up MIT Kerberos authentication with OpenLDAP authorization on Debian Jessie.
The authentication part is working great, as I can log in to SSH using my Kerberos account.
I can even create the user home directory using pam_mkhomedir.so.
However, I can't use an LDAP posixGroup to allow access to my SSH server:
Here is the output of getent group
getent group | grep tupac
adminsLinux:*:3000:uid=tupac,ou=people,dc=maytacapac,dc=inc
Here is the extract of sshd_config :
AllowGroups adminsLinux
Here are the log files :
sshd[3305]: User tupac from a.b.c.d not allowed because not in any group
sshd[3305]: input_userauth_request: invalid user tupac [preauth]
sshd[3305]: Connection closed by a.b.c.d [preauth]
Here is id tupac output
uid=20003(tupac) gid=20003 groups=20003
The tupac user is not part of any Unix group, but I want to allow access based on an OpenLDAP posixGroup. I thought the /etc/nsswitch.conf configuration and getent group resolution would be enough to grant access according to posixGroups.
|
uid=tupac,ou=people,dc=maytacapac,dc=inc looks wrong: most (okay, all except this one) LDAP-provided group memberships I've seen do not include an LDAP DN, just the username, so I would expect to instead see adminsLinux:*:3000:tupac.
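For comparison, a posixGroup entry that produces the expected getent output carries the bare username in memberUid. A hypothetical LDIF sketch (the ou=groups container is an assumption):

```
dn: cn=adminsLinux,ou=groups,dc=maytacapac,dc=inc
objectClass: posixGroup
cn: adminsLinux
gidNumber: 3000
memberUid: tupac
```

With such an entry, getent group would print adminsLinux:*:3000:tupac, and sshd's AllowGroups adminsLinux would then match.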
| getent group working, but sshd_config allowgroups does not retrieve appropriate group |
1,656,029,979,000 |
I won't explain how to configure TLS for LDAP on the server; there is plenty of material on Google for that (create TLS certs, create an LDIF, import the LDIF, try ldapsearch -ZZ, etc.).
It is also easy to force TLS from the server side, so that connections without -Z or -ZZ are refused:
ldapsearch -LLL -D "cn=ldapadm,dc=ldap1,dc=mydom,dc=priv" -wPASSWORD -b dc=ldap1,dc=mydom,dc=priv uidNumber=10009 uidNumber
ldap_bind: Confidentiality required (13)
additional info: TLS confidentiality required
With ldapsearch, using -Z is OK:
ldapsearch -Z -LLL -D "cn=ldapadm,dc=ldap1,dc=mydom,dc=priv" -wPASSWORD -b dc=ldap1,dc=mydom,dc=priv uidNumber=10009 uidNumber
dn: sambaSID=S-1-5-21-38458588-165473958-13245875-1289,ou=idmap,dc=ldap1,dc=mydom,dc=priv
uidNumber: 10009
I have forced TLS using this LDIF on the server:
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSecurity
olcSecurity: tls=1
Now the problem: even with TLS forced, the password can be sniffed from the LAN.
If I run the command without -Z, the connection is refused:
ldapsearch -LLL -D "cn=ldapadm,dc=ldap1,dc=mydom,dc=priv" -wPASSWORD -b dc=ldap1,dc=mydom,dc=priv uidNumber=10009 uidNumber
ldap_bind: Confidentiality required (13)
additional info: TLS confidentiality required
but tcpdump can see the password!
tcpdump -i any port 389 -vvv -Xx|egrep --colour cn= -A 11
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
blah blah blah blah blah blah blah blah blah blah .`7...."cn=ldapa
blah blah blah blah blah blah blah blah blah blah dm,dc=ldap1,dc=m
blah blah blah blah blah blah blah blah blah blah ydom,dc=priv..PAS
blah blah blah blah blah blah blah blah blah blah SSWORDCLEAR!
....
The question is simple: is it possible to force ldapsearch and all LDAP clients to use -Z when TLS is forced on the server?
If that is not possible, as it seems, I can propose some ideas:
1) An rc file with options for LDAP clients, containing the options to pass to the clients, for example:
LDAPSEARCH_OPTIONS="-ZZ"
2) A mechanism which recognizes the TLS-forced server and automatically enables -ZZ, with an exception for localhost or ldapi.
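Idea 1 could be approximated today with wrapper shell functions placed in a shared profile script. This is purely a hypothetical sketch (the path and function set are my own, not an existing OpenLDAP feature):

```shell
# Hypothetical /etc/profile.d/ldap-force-tls.sh: wrapper functions that make
# every interactive ldapsearch/ldapmodify invocation demand StartTLS (-ZZ).
ldapsearch() { command ldapsearch -ZZ "$@"; }
ldapmodify() { command ldapmodify -ZZ "$@"; }
```

This only covers interactive shells, of course; scripts that call the binaries by absolute path bypass it, which is why a server-advertised mechanism like idea 2 would be nicer.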
|
You can try adding an LDAP extended operation for STARTTLS onto the URI in your client LDAP configuration file (e.g. ~/.ldaprc or /etc/ldap/ldap.conf).
URI ldap://<ldap-server>/????1.3.6.1.4.1.1466.20037
I seem to have had some success with that, although I find that the TLS_REQCERT demand option either stops working or I don't quite understand how STARTTLS interacts with the certificate options in the LDAP config files. I.e. using the above extended-operation config, I have still seen the session try STARTTLS with a server that does not support STARTTLS and then fall back to clear text.
UPDATE: If you add an exclamation point (!) before the OID, then this seems to prevent the client from falling back to cleartext. E.g.
URI ldap://<ldap-server>/????!1.3.6.1.4.1.1466.20037
So it appears that the client (e.g. ldapsearch) command line option:
-Z is equivalent to adding ????1.3.6.1.4.1.1466.20037 to the URI
-ZZ is equivalent to adding ????!1.3.6.1.4.1.1466.20037 to the URI
END UPDATE
References with some hints:
https://lists.openldap.org/hyperkitty/list/[email protected]/thread/SXWOL5SVSLWSNX35QFPVP6BPSLSHWOYG/#SXWOL5SVSLWSNX35QFPVP6BPSLSHWOYG
https://www.openldap.org/lists/openldap-devel/200202/msg00070.html
https://ldapwiki.com/wiki/StartTLS
https://www.rfc-editor.org/rfc/rfc4511#page-40
https://git.openldap.org/search?utf8=%E2%9C%93&snippets=false&scope=&repository_ref=master&search=LDAP_EXOP_START_TLS&group_id=13&project_id=1
| openldap: is possible to force the starttls from a client? |
1,656,029,979,000 |
On a Debian Buster installation I have just installed the OpenLDAP server slapd with:
~$ sudo apt install slapd ldap-utils
~$ sudo dpkg-reconfigure slapd
During setup with the default options I was prompted to give an organisation name. I used home, so I get:
~$ ldapsearch -x -LLL -b dc=hoeft-online,dc=de
dn: dc=hoeft-online,dc=de
objectClass: top
objectClass: dcObject
objectClass: organization
o: home
dc: hoeft-online
dn: cn=admin,dc=hoeft-online,dc=de
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
Now I try to add an organisational unit (ou) to the organisation o: home as shown in the output but without success:
~$ cat add.ldif
dn: ou=posix,o=home,dc=hoeft-online,dc=de
objectClass: organizationalUnit
ou: posix
~$ ldapadd -xWD cn=admin,dc=hoeft-online,dc=de -f add.ldif
Enter LDAP Password:
adding new entry "ou=posix,o=home,dc=hoeft-online,dc=de"
ldap_add: No such object (32)
matched DN: dc=hoeft-online,dc=de
Adding an ou to the domainComponent (dc) works:
~$ cat add.ldif
dn: ou=posix,dc=hoeft-online,dc=de
objectClass: organizationalUnit
ou: posix
~$ ldapadd -xWD cn=admin,dc=hoeft-online,dc=de -f add.ldif
Enter LDAP Password:
adding new entry "ou=posix,dc=hoeft-online,dc=de"
What am I missing here? Isn't it possible to add an organizationalUnit (ou) to an organisation (o)? If not, why not? Where is that defined?
|
Are you certain that your LDAP directory contains the DN o=home,dc=hoeft-online,dc=de? The error suggests that it does not, but as you have not pasted the output of an appropriate ldapsearch command it's hard to tell. I suspect that is the issue, because otherwise I am unable to reproduce your problem.
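To check, you can ask the server directly whether the parent entry exists; a sketch (simple-bind options elided, adjust to your setup):

```shell
# An empty result here means o=home,dc=hoeft-online,dc=de does not exist,
# which would explain the "No such object (32)" error from ldapadd.
ldapsearch -x -LLL -b 'dc=hoeft-online,dc=de' '(o=home)' dn
```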
I'm starting with OpenLDAP 2.4.47 on Debian Stretch. I have a database for dc=example,dc=com; I start with:
$ ldapsearch ... -D cn=admin,dc=example,dc=com -x -w admin -b dc=example,dc=com -LLL
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Inc.
dc: example
I can then add a new o=home using this LDIF:
dn: o=home,dc=example,dc=com
objectClass: organization
o: home
Which looks like:
$ ldapadd -D cn=admin,dc=example,dc=com -x -W -f add-org.ldif
Enter LDAP Password:
adding new entry "o=home,dc=example,dc=com"
And then I can add an ou=posix like this:
dn: ou=posix,o=home,dc=example,dc=com
objectClass: organizationalUnit
ou: posix
Which looks like:
$ ldapadd -D cn=admin,dc=example,dc=com -x -W -f add-ou.ldif
Enter LDAP Password:
adding new entry "ou=posix,o=home,dc=example,dc=com"
And when I'm done:
$ ldapsearch ... -D cn=admin,dc=example,dc=com -x -w admin -b dc=example,dc=com -LLL
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Inc.
dc: example
dn: o=home,dc=example,dc=com
objectClass: organization
o: home
dn: ou=posix,o=home,dc=example,dc=com
objectClass: organizationalUnit
ou: posix
| Add organisational unit (ou) to organisation (o) in OpenLDAP |
1,656,029,979,000 |
I have installed OpenLDAP with yum, but I have accidentally deleted some of the config files. I am not able to recover them. I want to uninstall it. I tried the following command but it ends with an error:
--> Processing Dependency: PackageKit-glib = 0.5.8-20.el6 for package: PackageKit-gtk-module-0.5.8-20.el6.x86_64
--> Running transaction check
---> Package PackageKit-device-rebind.x86_64 0:0.5.8-20.el6 will be erased
---> Package PackageKit-gstreamer-plugin.x86_64 0:0.5.8-20.el6 will be erased
---> Package PackageKit-gtk-module.x86_64 0:0.5.8-20.el6 will be erased
--> Finished Dependency Resolution
Error: Trying to remove "yum", which is protected
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Can someone please tell me how to uninstall it properly, so I can install it again and make the config changes?
|
Make a backup of the configuration, then:
yum remove openldap
rpm -e openldap.package_name
yum install openldap
Then copy your configuration files back.
| How to uninstall OpenLDAP in RedHat? |
1,656,029,979,000 |
I'm currently trying to set up an integrated Kerberos V/LDAP system for authentication/authorization. From what I have managed to gather, there are at least two ways to integrate Kerberos V with LDAP:
Use LDAP as a backend to store Kerberos principals
Use Kerberos and SASL authentication via GSSAPI to authenticate to the LDAP server (to be able to query and modify the LDAP entries using a Kerberos ticket)
The two options aren't mutually exclusive. The thing is, I would like to make a hybrid of the two: not just use LDAP to store Kerberos principals, but also make sure that when I add a Kerberos principal, it's created with objectClass=posixAccount in order for it to appear as a Unix user entry to NSS for authorization purposes.
To paraphrase, I would like to have to add new user accounts in only one place (viz., the kadmin server) instead of two. Is this possible? If so, how?
If it's any help, I'm using OpenLDAP and MIT Kerberos on Debian Wheezy.
|
Since nobody answered this question and I found no existing solutions to this, I rolled my own solution in Perl. The solution is geared towards Debian specifically because this is my target environment. Feel free to fork it and adapt it to your needs.
Your comments on the code/coding style would be very appreciated. Feel free to tear it to shreds: I'm not a developer by training and I'm relatively new to scripting administrative tools.
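For MIT Kerberos with the LDAP KDB backend, the core of a useradd-like wrapper is roughly the following two steps. This is only a sketch: the DNs and IDs are made up, and the -x linkdn= behaviour should be verified against your krb5 version:

```shell
# 1) Create the POSIX account entry in LDAP (hypothetical DN and IDs).
ldapadd -x -D cn=admin,dc=example,dc=com -W <<'EOF'
dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: jdoe
cn: John Doe
sn: Doe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jdoe
loginShell: /bin/bash
EOF

# 2) Attach the Kerberos principal to that same entry, so the krbPrincipal
#    attributes land on the existing posixAccount object.
kadmin.local -q "add_principal -x linkdn=uid=jdoe,ou=people,dc=example,dc=com jdoe"
```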
| Integrating LDAP and Kerberos V to add users via a useradd-like interface |
1,656,029,979,000 |
I feel like I am overlooking something obvious.
My goal is to run a systemd service with an LDAP service account at system startup. That way, if I disable the account in LDAP, the next time the service attempts to start it will fail, because the user is not authorized. (Eventually, I need to get my Kerberos setup for the service to use ticketing, but I'm not there yet, and that may be my problem overall here.)
I have a functioning LDAP that I can use to control user logins, so my user is cn=cbrand,ou=people,dc=jcolebrand,dc=info and I can login with this user on the connected machines in my network. I've got a sudoers setup through LDAP, so myself and one other user on my system can log in and run sudo commands, but others who have login privileges can't run sudo commands. I have other users who can use LDAP to authenticate to various applications on my servers (so they don't have system login privileges, but they do have basic accounts with passwords).
I have ldap2pg setup to handle readers/writers/superusers on my system which I would like to use to also restrict access for services running on the network that are connected to the LDAP instance.
I would like to be able to define an account in LDAP for some service X (as a real-world example, I'm going to install gogs, or perhaps I will migrate my jellyfin service to run under a similar account as well), and then use that service account to run the service (so I would expect something like cn=gogs,ou=services,dc=jcolebrand,dc=info or cn=jellyfin,ou=services,dc=jcolebrand,dc=info). What I would like to avoid doing is to have to create a local user/group by hand to run as, because that would have to be managed per-server. Instead I would like to be able to use ldap2pg to revoke database access, or use LDAP to deny login/access privileges at the directory level.
If I were going to create a local account, I would just modify my service target to use the User= directive, but that seems incorrect for connecting to LDAP, especially if I want to force a password (this may require some automation around rotating that password for the service account, that's another story, I suspect, but maybe not! I know I don't know enough about this to know if I'm overlooking something "obvious".)
As a partial configuration snippet, if this helps, I have:
cat /etc/openldap/ldap.conf
<snip>
BASE dc=jcolebrand,dc=info
<snip>
If this were a Windows Active Directory Domain I would use gMSA accounts and this would be seamless, or I would even be able to just create a generic service account in the directory with a password, but I do not know how to do this in Fedora/Linux. Specifically what I do not know how to replicate from a Windows world is this step:
Windows -> Open services.msc -> Find the service -> Goto Properties -> set the user credentials on the security tab to either the gMSA or the account/password, as appropriate.
Looking at the systemd docs I see the LoadCredential documentation, but that does not seem to make sense (by the way the documentation is written) for supplying the password to run as an LDAP account.
Am I overthinking this and I should have a passwordless objectClass applicationProcess LDAP entry and set the service configuration for User="cn=gogs,ou=services,dc=jcolebrand,dc=info" and then just stop doing extra thinking?
Nota bene: Eventually I would also like this same service to talk to postgres using the same information, but that likely just requires an appropriate connection string, which I can use the environment file for the systemd service configuration. Pointers for this are also welcome.
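For the nota bene: the connection details can live in an environment file referenced by the unit. A hypothetical sketch (the file path, host, and values are assumptions; libpq-based clients honour the standard PG* variables):

```
# /etc/gogs/env (hypothetical), pulled in by the unit with:
#   EnvironmentFile=/etc/gogs/env
PGHOST=db.jcolebrand.info
PGDATABASE=gogs
PGUSER=gogs
# PGPASSWORD could go here too, but a pgpass file or Kerberos is safer.
```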
|
I would create a Linux system user (an objectClass: posixAccount in LDAP) and simply set the systemd unit file to have User=<uid value from the LDAP record>.
If PAM is properly configured, this should resolve to a UID and Linux username which systemd will be able to spawn the executable process under. You'll need a service user with a Linux UID on the machine for the kernel to track the process anyway, so I don't think you can avoid using objectClass: posixAccount.
Your LDAP record will likely look something like:
dn: cn=gogs,ou=services,dc=jcolebrand,dc=info
objectClass: posixAccount
uid: gogs
cn: gogs
loginShell: /bin/nologin
uidNumber: 9999
gidNumber: 9999
homeDirectory: /path/to/gogs/install
description: gogs service user
I believe that you can add objectClass applicationProcess to this entry as well, should you want to.
You can then either edit the gogs systemd unit directly, or create an override (E.G. with sudo systemctl edit gogs.service) and add:
[Service]
User=gogs
(you may need a objectClass: posixGroup entry for gogs as well, and then you can use Group=gogs in the systemd unit)
I am confused by these parts of your question:
would just modify my service target to use the User= directive, but that seems incorrect for connecting to LDAP, especially if I want to force a password
and
Looking at the systemd docs I see the LoadCredential documentation, but that does not seem to make sense for supplying the password to run as an LDAP account
Under Linux, service accounts generally don't have passwords in my experience. They're simply a separate user that the processes for that service run as. Database authentication can be done with a password string (which you could distribute using LDAP) or via Kerberos (probably? I'm not sure how the service gets a ticket) but the service POSIX account has no password -- because nobody ever logs in to it.
| Set a service to run as account in LDAP with credentials |
1,656,029,979,000 |
I have set up an LDAP server successfully and everything works fine. However, I cannot access the server with an 'anonymous' bind, which, according to every Google search, should be possible.
When I execute:
# ldapsearch -x -H ldap://localhost -b dc=example,dc=com
the output says:
# result: 50 Insufficient access
Note: the only ACL that exists is:
olcAccess: {0}to * by self write by anonymous auth by * none
Does this prove that the server is not compiled or configured to work with anonymous binds? And if so, what is the best way to enable them?
|
Someone from ServerFault answered this question.
https://serverfault.com/questions/748758/enable-anonymous-bind-in-openldap/748904#748904
The issue was the configured ACL. The by * none clause blocked most anonymous actions. Once that was removed, the server allowed anonymous ldapsearch operations, proving that, by default, OpenLDAP supports anonymous bind.
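The kind of ACL change involved looks roughly like the following LDIF. This is a sketch of the common Debian-style default (the {1}mdb database index and attribute list are assumptions; check them against your own cn=config before applying):

```
dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
olcAccess: {1}to * by self write by * read
```

This keeps userPassword protected while the final by * read clause lets anonymous clients search the rest of the tree.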
| Enable anonymous bind in openldap |
1,656,029,979,000 |
I have modified ldap.conf and slapd.conf. I'm wondering how I can restart the ldap/client service, filesystem/autofs and name-service/cache.
OS: Solaris 11, but advice for Linux should help too.
|
svcadm restart ldap/client should do the trick. Depending on what you're running, you might also need to restart filesystem/autofs.
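Putting the three services from the question together on Solaris 11, something like the following should work (the exact FMRI spellings vary between releases, so confirm them with svcs first):

```shell
svcs -a | egrep 'ldap/client|autofs|name-service'     # confirm the FMRIs
svcadm restart svc:/network/ldap/client:default
svcadm restart svc:/system/filesystem/autofs:default
svcadm restart svc:/system/name-service/cache:default
```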
| Restarting LDAP client service |