| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,582,061,970,000 |
So, let's say I have an array with a list of URLs, and I want to use something such as GNU parallel to download the URLs in parallel. A command like this would do the trick.
parallel -u wget -qc --show-progress ::: "${URLs[@]}"
The only issue with this command is that, while the progress of the commands is shown (especially since -u shows output as it happens rather than waiting), the output of all the commands goes to the same line. When one wget instance produces output, it overwrites the progress of the previous wget on the tty. So, I was wondering how to have each wget output on its own line at the same time?
|
parallel --ll wget -qc --show-progress ::: "${URLs[@]}"
The --ll option is in alpha testing, but works in my test.
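If the installed version of GNU parallel lacks --ll, a possible fallback (a sketch, without --ll's fixed screen lines) is --line-buffer (--lb) plus --tag, which keeps each job's output lines whole and labels them with the job's argument. Here echo stands in for wget so the example runs anywhere:

```shell
# --lb prints output line by line as it arrives; --tag prefixes each
# line with the argument of the job that produced it (echo stands in
# for wget here):
parallel --lb --tag 'echo downloading {}' ::: url1 url2 url3
```

Each line is prefixed with its URL, so interleaved progress from different jobs stays attributable, though unlike --ll the lines are not updated in place.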
| Run one-line programs in parallel on separate lines |
1,582,061,970,000 |
Does GNU Parallel start a batch of as many jobs as possible (the number of jobs started being governed by GNU Parallel internals and/or the -j option along with given parameters), and, once those complete, start the next batch of jobs, and so on?
Context
I want to learn how to better handle timestamps related to jobs (start time, end time and then running time) and GNU Parallel. As an example here, I would like to understand if I can make use of the timestamps in my custom logs, recorded via a custom log function, which come just before executing the actual processing command, always inside a for loop that is passed to GNU Parallel. Can they give me the running time of the actual processing commands?
Details
Inside a for loop, passed then to GNU Parallel along with --joblog, I have put two commands: the first is a custom log command that includes some timestamping, placed just before the second command, which does the actual processing of interest. The timing of the custom log command is not of direct interest -- it is just another logging step. Unfortunately, I was not aware of how the --joblog option works -- as explained in GNU Parallel --joblog logs only first line of commands inside a for loop, it only logs the first command.
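One workaround, sketched here, is to wrap both commands in an exported shell function so that --joblog's JobRuntime covers the whole job; process_band and its body are hypothetical stand-ins for the real log function and gdalmerge_and_clean:

```shell
# Sketch (bash + GNU parallel assumed; the function body is a stand-in):
process_band() {
    echo "Action=Metadata, Map=$1"   # stand-in for the custom log command
    sleep 0.2                        # stand-in for gdalmerge_and_clean
}
export -f process_band
parallel --joblog /tmp/band.joblog process_band ::: band_1 band_2 band_3
cut -f4 /tmp/band.joblog   # JobRuntime now covers log + processing together
```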
Trying to make sense of the logs I have, I use mlr to show the first three records of a --joblog output:
❯ mlr --itsv --oxtab head -n 3 parallel/parallel.job.4437.3.log
Seq 1
Host :
Starttime 1670106266.417
JobRuntime 0.000
Send 0
Receive 0
Exitval 0
Signal 0
Command log /scratch/pvgis/job.4437.3/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log Action=Metadata, Map=era5_and_land_10m_u_component_of_wind_2008_band_79_merged_scaled.nc, Hours since=946704, Longname=10 metre U wind component, Units=m s**-1
Seq 2
Host :
Starttime 1670106266.419
JobRuntime 0.009
Send 0
Receive 0
Exitval 0
Signal 0
Command log /scratch/pvgis/job.4437.3/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log Action=Metadata, Map=era5_and_land_10m_u_component_of_wind_2008_band_39_merged_scaled.nc, Hours since=946705, Longname=10 metre U wind component, Units=m s**-1
Seq 3
Host :
Starttime 1670106266.422
JobRuntime 0.012
Send 0
Receive 0
Exitval 0
Signal 0
Command log /scratch/pvgis/job.4437.3/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log Action=Metadata, Map=era5_and_land_10m_u_component_of_wind_2008_band_28_merged_scaled.nc, Hours since=946706, Longname=10 metre U wind component, Units=m s**-1
The above doesn't refer to the running time of the command gdalmerge_and_clean, which I am interested in. Nevertheless, I thought the logged start times should differ between successive lines by the running time of all the commands executed (in batches?) in one iteration of the for loop passed to GNU Parallel. I guess this is not the case: GNU Parallel is very precise in what it logs, which is exactly the running time of the very command it reads first.
The differences between successive Starttime records (below shown the first 10 lines)
mlr --itsv --opprint step -a delta -f Starttime then rename Starttime_delta,Delta then cut -f Starttime,JobRuntime,Delta parallel/parallel.job.4437.3.log |head
are
Starttime JobRuntime Delta
1670106266.417 0.000 0
1670106266.419 0.009 0.0019998550415039062
1670106266.422 0.012 0.003000020980834961
1670106266.424 0.014 0.002000093460083008
1670106266.427 0.013 0.003000020980834961
1670106266.434 0.012 0.006999969482421875
1670106266.439 0.021 0.004999876022338867
1670106266.442 0.019 0.003000020980834961
1670106266.446 0.018 0.004000186920166016
..
and so on it goes. The average Delta
mlr --itsv --opprint step -a delta -f Starttime then rename Starttime_delta,Delta then cut -f Starttime,JobRuntime,Delta then stats1 -a mean -f Delta parallel/parallel.job.4437.3.log
is
Delta_mean
0.33402504553451784
which obviously reflects the log commands -- it is unlikely the gdalmerge_and_clean commands are that fast.
Nonetheless, from the custom log commands, I can compute the overall duration of all jobs run, from the overall Start and End timestamps
Action=Processing, Start=2022-12-02 23:15:50
Action=Processing, End=2022-12-04 02:16:43
which is very useful. However, I want to know more about each and every single job run during this "Processing". This is why there is a log command recording a timestamp just before each actual gdalmerge_and_clean command executes.
These log lines look like so:
..
size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_210_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_211_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_212_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_213_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_214_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_215_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_216_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_217_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_218_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_219_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_220_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_221_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_222_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_223_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_224_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_225_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_226_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_227_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_228_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_229_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_230_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_231_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:48:15
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_232_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_233_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_234_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_235_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_236_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_237_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_238_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_239_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_240_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_241_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_242_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_243_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_244_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_245_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_246_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_247_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_248_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_249_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_250_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_251_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_252_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_253_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_254_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:55:45
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_255_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_256_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_257_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_258_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_259_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_260_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_261_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_262_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_263_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_264_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_265_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_266_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_267_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_268_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_269_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_270_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_271_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_272_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_273_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_274_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_275_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_276_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
Action=Merge, Output=era5_and_land_10m_u_component_of_wind_2008_band_277_merged.nc, Pixel size=0.099999998304108 -0.100000000000000, Timestamp=2022-12-02 23:56:02
..
Using mlr to compute, again, the differences between the logged timestamps, maybe there is something useful? The non-zero differences then concern the timestamps of batches of jobs started at different moments (I think this is useful because many jobs share the same start time, since they run in parallel via GNU Parallel, right?):
mlr --ocsv grep 'Action=Merge, Output' then clean-whitespace then put '$Seconds = localtime2sec($Timestamp)' then step -a delta -f Seconds then cut -f Timestamp,Seconds,Seconds_delta then cat -n then rename n,Job,Seconds_delta,Delta then filter '$Delta != 0' jobs/Process_2022_12_02_23_15_50_10m_u_component_of_wind_2008.log
are
Job,Timestamp,Seconds,Delta
209,2022-12-02 23:48:15,1670024895,625
232,2022-12-02 23:55:45,1670025345,450
255,2022-12-02 23:56:02,1670025362,17
278,2022-12-02 23:56:19,1670025379,17
291,2022-12-02 23:56:20,1670025380,1
301,2022-12-02 23:56:36,1670025396,16
324,2022-12-02 23:56:56,1670025416,20
347,2022-12-02 23:57:11,1670025431,15
370,2022-12-02 23:57:25,1670025445,14
393,2022-12-02 23:57:38,1670025458,13
..
8570,2022-12-03 21:18:20,1670102300,94
8593,2022-12-03 21:19:48,1670102388,88
8616,2022-12-03 21:21:56,1670102516,128
8639,2022-12-03 21:23:54,1670102634,118
8662,2022-12-03 21:25:42,1670102742,108
8685,2022-12-03 21:26:00,1670102760,18
8708,2022-12-03 21:27:12,1670102832,72
8731,2022-12-03 21:28:24,1670102904,72
8754,2022-12-03 21:29:19,1670102959,55
8777,2022-12-03 21:29:59,1670102999,40
Maybe these differences do tell something, more or less, about how long each individual job run inside a GNU Parallel-ised for loop took?
|
GNU Parallel starts a job when there is a free job slot. The number of job slots is given by -j/--jobs and defaults to the number of CPU threads.
Let us assume your server has 8 CPU threads.
When you start GNU Parallel it will spawn 8 jobs immediately. When a job finishes, the info is logged (in --joblog), and a new job is spawned.
So if all your jobs take exactly the same time, it will seem as if GNU Parallel spawns jobs in batches. But it does not. This should make it easier to see what is going on:
seq 1000 | parallel --lb --joblog my.log 'echo Starting {};sleep {};echo Ending {}'
In general, it seems using gdalmerge_and_clean is a really bad way of learning how to use GNU Parallel. Instead, use much simpler examples to learn from, and then apply what you have learned to gdalmerge_and_clean.
| Understanding timestamps in a GNU Parallel --joblog output |
1,582,061,970,000 |
I have the following command in my Makefile
parallel \
--eta \
--bar \
--joblog mnist/embedder.joblog \
pipenv run python3 \
-m mnist.train_embedder \
--embedder_name {1} \
--embedder_dim {2} \
--embedder_lr {3} \
--embedder_epochs {4} \
:::: grid/embedder_name \
:::: grid/embedder_dim \
:::: grid/embedder_lr \
:::: grid/embedder_epochs
Each file contains something like
$ cat grid/embedder_name
ae
cnn
$ cat grid/embedder_dim
24
32
48
64
A dry run results in
$ parallel \
--dry-run \
--eta \
--joblog mnist/embedder.joblog \
pipenv run python3 \
-m mnist.train_embedder \
--embedder_name {1} \
--embedder_dim {2} \
--embedder_lr {3} \
--embedder_epochs {4} \
:::: grid/embedder_name \
:::: grid/embedder_dim \
:::: grid/embedder_lr \
:::: grid/embedder_epochs
Computers / CPU cores / Max jobs to run
1:local / 96 / 8
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 8 AVG: 0.00s local:8/0/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name ae --embedder_dim 24 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 7 AVG: 0.00s local:7/1/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name ae --embedder_dim 32 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 6 AVG: 0.00s local:6/2/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name ae --embedder_dim 48 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 5 AVG: 0.00s local:5/3/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name ae --embedder_dim 64 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 4 AVG: 0.00s local:4/4/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name cnn --embedder_dim 24 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 3 AVG: 0.00s local:3/5/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name cnn --embedder_dim 32 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 2 AVG: 0.00s local:2/6/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name cnn --embedder_dim 48 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 1 AVG: 0.00s local:1/7/100%/0.0s pipenv run python3 -m mnist.train_embedder --embedder_name cnn --embedder_dim 64 --embedder_lr 0.001 --embedder_epochs 32
ETA: 0s Left: 0 AVG: 0.00s local:0/8/100%/0.0s
The argument list is still growing, and if I want to add an argument between --embedder_name and --embedder_dim, I must edit --embedder_dim {2} to --embedder_dim {3}, --embedder_lr {3} to --embedder_lr {4}, and so on and so forth. This is tedious and error-prone.
Can I make the positional arguments into named arguments? I imagine something like the following
parallel \
--eta \
--bar \
--joblog mnist/embedder.joblog \
pipenv run python3 \
-m mnist.train_embedder \
--embedder_name {embedder_name} \
--embedder_dim {embedder_dim} \
--embedder_lr {embedder_lr} \
--embedder_epochs {embedder_epochs} \
:::: grid/embedder_name \
:::: grid/embedder_dim \
:::: grid/embedder_lr \
:::: grid/embedder_epochs
While this creates a lot of duplication of the names (the string embedder_name appears thrice in the command!), at least it's more robust to accidentally incorrect order of arguments.
In case it's relevant
$ uname -a
Linux t1v-n-5d019513-w-0 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
|
--header : does almost what you want.
It is made for CSV-files where the first line is a header.
So you need to prepend 'grid/embedder_name' with 'name':
$ cat grid/embedder_name
name
ae
cnn
$ cat grid/embedder_dim
dim
24
32
48
64
Also you do not need :::: between every file. A single one is enough (but if you find it easier to read, keep them):
parallel --header : echo {name} {dim} :::: grid/embedder_dim grid/embedder_name
(@r_31415 shows a non-working example where the name is put on the command line. You do that if you use ::: , not ::::. Example: parallel --header : echo {foo} ::: foo 1 2 3)
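Putting the --header : pieces together, a minimal runnable sketch (the input files are hypothetical and created inline; {name} and {dim} come from each file's first line):

```shell
# Each file's first line names its replacement string:
printf 'name\nae\ncnn\n' > /tmp/embedder_name
printf 'dim\n24\n32\n'   > /tmp/embedder_dim
# All 2 x 2 combinations are generated, referenced by name:
parallel --header : echo 'name={name} dim={dim}' :::: /tmp/embedder_name /tmp/embedder_dim
```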
Edit:
From version 20220822 you can do:
parallel --header 0 echo {grid/embedder_name} {grid/embedder_dim} :::: grid/embedder_dim grid/embedder_name
parallel --header 0 echo {embedder_name} {embedder_dim} :::: embedder_dim embedder_name
For this the files should not have a header in the first line.
| GNU Parallel with named arguments |
1,582,061,970,000 |
I ran into a strange problem. I am running this example from https://www.gnu.org/software/parallel/parset.html, but it is not working inside a script file.
parset myarray seq 3 ::: 4 5 6
echo "${myarray[1]}"
I am getting the following error when I run the script file:
Unknown option: myarray
Unknown option: seq
Unknown option: 3
Unknown option: :::
Unknown option: 4
Unknown option: 5
Unknown option: 6
parset only works if it is a function. The function is defined as part of env_parallel.
Do the below and restart your shell.
But if I use the command directly in a terminal, it works. What am I doing wrong here?
|
In short: You need to do what the error message tells you to do.
Longer version: There are two things called parset. The first is a shell script that tells you how to enable the function version. That's the entire purpose of this script: to provide setup instructions for people trying to run parset without having first loaded the function definition. (When the function is defined, it takes precedence over the script, so running parset runs the function instead of the script.)
The second is a shell function that actually does the work. (Why does it have to be a function? Because a function running in a shell can modify its own environment, while a child process cannot modify the environment of its parent. If it were a script, it would be a child process of the parent shell and unable to do its job.) That function needs to be defined in the shell that uses it.
To define the function, you need to source env_parallel.$SHELL in your script before you use the functions it defines. That's probably being done in your shell login startup scripts (e.g. ~/.bash_profile) but not in your non-login startup scripts (e.g. ~/.bashrc), which is why it works from your terminal but not from a script.
In other words, if your script is run with bash as the interpreter and the env_parallel.* scripts are in /usr/bin/, add the following somewhere near the start of your script:
. /usr/bin/env_parallel.bash
IMPORTANT: source the appropriate env_parallel.SHELL for the interpreter you're running your script with. e.g. on my debian system, parallel provides the following:
$ ls -l /usr/bin/env_parallel*
-rwxr-xr-x 1 root root 4749 Aug 29 2021 /usr/bin/env_parallel
-rwxr-xr-x 1 root root 14565 Aug 29 2021 /usr/bin/env_parallel.ash
-rwxr-xr-x 1 root root 13565 Aug 29 2021 /usr/bin/env_parallel.bash
-rwxr-xr-x 1 root root 5377 Aug 29 2021 /usr/bin/env_parallel.csh
-rwxr-xr-x 1 root root 14554 Aug 29 2021 /usr/bin/env_parallel.dash
-rwxr-xr-x 1 root root 6643 Aug 29 2021 /usr/bin/env_parallel.fish
-rwxr-xr-x 1 root root 12595 Aug 29 2021 /usr/bin/env_parallel.ksh
-rwxr-xr-x 1 root root 12626 Aug 29 2021 /usr/bin/env_parallel.mksh
-rwxr-xr-x 1 root root 14754 Aug 29 2021 /usr/bin/env_parallel.sh
-rwxr-xr-x 1 root root 5380 Aug 29 2021 /usr/bin/env_parallel.tcsh
-rwxr-xr-x 1 root root 12604 Aug 29 2021 /usr/bin/env_parallel.zsh
Alternatively, add it to your non-login shell startup script (e.g. ~/.bashrc) so that the parset function is available to scripts run by non-login shells.
See man parset for details.
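A complete script sketch, assuming a bash interpreter and the /usr/bin install path shown above (adjust the path for your system):

```shell
#!/usr/bin/env bash
# Load the parset function definition before first use (the path is an
# assumption; check `command -v env_parallel.bash` on your system):
. /usr/bin/env_parallel.bash
parset myarray seq 3 ::: 4 5 6
echo "${myarray[1]}"
```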
| GNU parset is not working in a script but works in terminal |
1,582,061,970,000 |
I need to run a python script several times in parallel, but so far I have done it by executing it in the background, like this:
ipython program.py & ipython program.py & ...
I want to know whether this approach uses one core per execution or just runs program.py using threads.
By the way, I want to explore the use of GNU Parallel, but the examples I find are about commands like "cat" or "find".
How can I use GNU Parallel to execute program.py concurrently, each time on a different core?
Thanks for your help.
|
How can I use GNU Parallel for executing program.py concurrently, each time in a different core?
You (almost) never want to peg a program to a certain core. Typically you do not care which core is doing the work. And often you simply want to run one job for each CPU thread in the system.
And that is easy to do using GNU Parallel:
seq 1000 | parallel ipython program.py
This will run ipython program.py 1 .. ipython program.py 1000 but only run one job per CPU thread in parallel. So on an 8 core machine with hyperthreading (i.e. 16 CPU threads) it will start 16 jobs in parallel.
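If you do want to cap the number of workers explicitly, -j accepts a number or a percentage of CPU threads. A sketch, with echo standing in for the Python program so it runs anywhere ({%} expands to the job-slot number):

```shell
# Run at most 2 jobs at a time; slot numbers can only be 1 or 2:
seq 6 | parallel -j2 'echo "job {} ran in slot {%}"'
# Or use half the machine's CPU threads (sketch, not run here):
# seq 1000 | parallel -j50% ipython program.py
```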
This is covered in chapter 2 of https://doi.org/10.5281/zenodo.1146014 which I encourage you to spend 15 minutes reading. Your command line will love you for it.
| How to use GNU Parallel for executing a program concurrently? |
1,582,061,970,000 |
Is it possible to abort a process run by GNU Parallel if it exceeds an estimated runtime? For example, I have a handler for recon-all processing:
while [ -n "${ids[0]}" ] ; do
printf 'Processing ID: %s\n' "${ids[@]}" >&2
/usr/bin/time -f "$timefmt" \
    printf '%s\n' "${ids[@]}" | parallel --jobs 0 recon-all -s {.} -all -qcache -parallel -openmp 8
n=$(( n + 1 ))
ids=( "${all_ids[@]:n*4:4}" ) # pick out the next four IDs
done
and for some patients the recon-all process inside parallel doesn't complete for various reasons (it can run for several days, which is abnormal).
Could I limit the runtime inside parallel to 9 hours, so the command moves on to the next group in the loop?
|
You are looking for --timeout.
You can do --timeout 9h or you can do --timeout 1000%. The last will measure how long the median time is for a job to succeed, and given the median it will compute a timeout that is 1000% of the median run time.
The neat thing about using a percentage is that if the compute program gets faster or slower for the normal case, you will not need to change the timeout.
See it in action:
parallel --timeout 300% 'sleep {}; echo {}' ::: 100 2 3 1 50 2 3 1 2 1 3 2 1 4 2 1 2 3
# Compute program gets 10 times faster
parallel --timeout 300% 'sleep {=$_ /= 10 =}; echo {}' ::: 100 2 3 1 50 2 3 1 2 1 3 2 1 4 2 1 2 3
The median (not average) runtime is measured as the median of the successfully completed jobs (though minimum 3). So if you have 8 jobs with job 5 being infinite, it will get killed when its runtime hits the given percentage of the median:
parallel --timeout 300% 'sleep {}; echo {}' ::: 1 2 1 2 100 2 1 2
This also works if the first job is the one that is stuck:
parallel --timeout 300% 'sleep {}; echo {}' ::: 100 2 1 2 1 2 1 2
The only situation it does not work is if all jobslots are stuck on their first job:
parallel -j4 --timeout 300% 'sleep {}; echo {}' ::: 100 100 100 100 1 2 1 2
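A quick fixed-duration demonstration: with a 1-second timeout the sleep 3 job is killed while the fast jobs complete, and parallel's exit status reflects the killed job:

```shell
# Jobs exceeding --timeout are killed and parallel moves on:
parallel --timeout 1 'sleep {}; echo done after {}s' ::: 0 3 0
```

Applied to the question's pipeline, this would presumably look like parallel --jobs 0 --timeout 9h recon-all -s {.} -all -qcache -parallel -openmp 8.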
| gnu parallel exit process with timeout |
1,582,061,970,000 |
I'm trying to wrap my head around running parallel in parallel, and it seems like I have a situation where that would be an ideal solution.
I want to run a set of jobs in series - call them A-1, A-2, A-3 and so on. These would be run with --jobs 1 (or sem?).
I want to run sets of those in parallel - call them A, B, C, and so on. These would be run with the default number of jobs (cores).
The “A” sets of jobs may have a different number of jobs in them than the “B” sets of jobs; similarly for C or others.
Visually, where the horizontal axis is time and the vertical is job sets:
A-1--->A-2--->A-3--->
B-1->B-2-->B-3-->B-4--->
C-1-------------C-2--->
D-1------------------>
For this, let's assume all jobs are sleep $((RANDOM % 10)).
I assume there will have to be some sort of link (ala --link) between job sets and jobs - A with 1, 2 and 3; B with 1, 2, 3 and 4; C with 1 and 2; and D with just 1, using the above visual.
This may be a better example of what I was trying to do, using @ole-tang's solution
$ declare -fp apples bananas cherries dates
apples ()
{
echo -n grannysmith fiji pinklady | parallel -d' ' -j1 'echo apples-{#}: {};sleep $((RANDOM % 3))'
}
declare -fx apples
bananas ()
{
echo -n plantain cavadish red manzano | parallel -d' ' -j1 'echo bananas-{#}: {};sleep $((RANDOM % 3))'
}
declare -fx bananas
cherries ()
{
echo -n sweet sour red yellow bing | parallel -d' ' -j1 'echo cherries-{#}: {};sleep $((RANDOM % 3))'
}
declare -fx cherries
dates ()
{
echo -n medjool khola | parallel -d' ' -j1 'echo dates-{#}: {};sleep $((RANDOM % 3))'
}
declare -fx dates
$ parallel ::: apples bananas cherries dates
bananas-1: plantain
bananas-2: cavadish
bananas-3: red
bananas-4: manzano
dates-1: medjool
dates-2: khola
apples-1: grannysmith
apples-2: fiji
apples-3: pinklady
cherries-1: sweet
cherries-2: sour
cherries-3: red
cherries-4: yellow
cherries-5: bing
|
a() {
seq 10 | parallel -j1 'echo A-{#};sleep $((RANDOM % 10))'
}
b() {
seq 10 | parallel -j1 'echo B-{#};sleep $((RANDOM % 10))'
}
c() {
seq 10 | parallel -j1 'echo C-{#};sleep $((RANDOM % 10))'
}
d() {
seq 10 | parallel -j1 'echo D-{#};sleep $((RANDOM % 10))'
}
export -f a b c d
parallel ::: a b c d
If you want to see the output:
parallel --lb ::: a b c d
It can also be done:
doit() {
seq 10 | parallel -j1 'echo '$1'-{};sleep $((RANDOM % 10))'
}
export -f doit
parallel --lb doit ::: A B C D
| Parallel in Parallel |
1,582,061,970,000 |
When I try to write a pipeline like this:
git branch | rg '^\*' | parallel git pull {}
I run into a problem with whitespace. Because the branch names have leading whitespace, parallel ends up attempting to run git pull ' foo' which is wrong.
Is there an argument for GNU Parallel that says "strip trailing/leading whitespace"? Alternatively, is there a separate program that does this?
I am aware that I could:
Use cut -c 3- but this only works if leading space is consistent
Use sed or awk, but these result in having to type a complex expression every time
|
--trim rl
git branch | rg -v '^\*' | parallel --dr --trim rl git pull {}
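If GNU Parallel is not available, a hedged alternative (not part of the original answer) is to strip the whitespace with sed before piping on; `--trim rl` does the same thing inside parallel:

```shell
# strip leading and trailing whitespace from each line with sed;
# the cleaned names can then be piped on to parallel, xargs, etc.
printf '  main\n  feature-x  \n' |
  sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//'
```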
| Strip leading and trailing whitespace when piping to GNU parallel |
1,582,061,970,000 |
Introduction
I have a bash script to execute a command in multiple servers through ssh. It uses GNU parallel in the parallel version, a for loop in the sequential one.
The script is used like this:
foreach_server "cd $dir && find -name '*.png' | wc -l"
foreach_server "cd $dir && git --no-pager status"
Sometimes I need to have access to executables in conda environments (https://docs.conda.io/en/latest/) and the only way I found to make this work is to use an interactive shell, that is, use bash -ic before the commands I want to execute, like so, ssh $host bash -ic $cmd, so that the conda environment is loaded. This unfortunately causes two error messages on stderr, which I was not able to prevent:
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
So I made a filter with sed which removes these two lines from stderr and passes on the other lines in stderr:
ssh $host "$@" 2> >(sed -e "$filter1" -e "$filter2" >&2)
Problem: the sed filter makes the parallel version hang
The sed filter works fine in the sequential version, but the parallel version hangs at the end of the script, showing that the sed process is alive but doing no work. How can I prevent this?
I suspect that the problem lies in the process substitution, but I really cannot diagnose what is wrong.
Referenced script
#!/bin/bash
set -u
exit_trap() {
echo "Interrupted"
exit 1
}
trap exit_trap SIGINT
filter1='/^bash: cannot set terminal process group/d'
filter2='/^bash: no job control in this shell/d'
hosts=("host1" "host2") # more hosts in the real file
if [ -z ${serial+x} ];
then
# Parallel version ==> THIS VERSION HANGS AT THE END, AFTER ALL OUTPUT HAS BEEN WRITTEN
echo ${hosts[@]} | sed 's/ /\n/g' | parallel "echo ----- {} ----- && ssh {} \"$@\"" 2> >(sed -e "$filter1" -e "$filter2" >&2)
else
# Serial version ==> THIS VERSION WORKS FINE
for host in ${hosts[@]};
do
echo "------------------ $host ------------------"
ssh $host "$@" 2> >(sed -e "$filter1" -e "$filter2" >&2)
echo "--------------------------------------$(echo $host | sed 's/./-/g')"
done
fi
|
Rather than jump through hoops trying to remove the symptom of error messages, it would be better to remove the cause.
This will assign a tty to the ssh session so that a terminal ioctl can be applied:
ssh -t $host "$@"
You might need to double up the -t flag as -tt, depending on how you're actually calling this line.
However, the underlying issue seems to be that you need an interactive shell to set up the conda environment. The reason for this almost certainly is that it's being set up in ~/.bashrc. You can either . that explicitly or extract the relevant commands and use them in your script.
I'm not familiar with conda myself but the question How do I activate a conda environment in my .bashrc? on AskUbuntu seem to reference the relevant parts of your ~/.bashrc that you would require.
| Bash script hangs when filtering stderr through sed |
1,630,653,200,000 |
I would like to run a list of Gnuplot commands in parallel.
I'm getting an "Unrecognized option" error:
$ ./parallel-plot-sine.sh | parallel -q gnuplot
unrecognized option -e "set terminal pngcairo; set output '100.png'; set title 'Sample rate: 100'; set key left box; set autoscale; set samples 100; plot [-30:20] sin(x)"
I think this indicates that Gnuplot isn't happy with the command it's being fed, but I can't figure out why.
The shell script parallel-plot-sine.sh composes the commands that will feed into Gnu Parallel:
#!/bin/bash
# Compose command-lines to run in parallel
command_array=()
for fs in $(seq 100 100 1000); do
command_array+=("-e \"set terminal pngcairo; set output '${fs}.png'; set title 'Sample rate: $fs'; set key left box; set autoscale; set samples $fs; plot [-30:20] sin(x)\"")
done
# Print command strings to output for gnu parallel
for cmd in "${command_array[@]}"; do
printf "%s\n" "$cmd"
done
This prints:
$ ./parallel-plot-sine.sh
-e "set terminal pngcairo; set output '100.png'; set title 'Sample rate: 100'; set key left box; set autoscale; set samples 100; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '200.png'; set title 'Sample rate: 200'; set key left box; set autoscale; set samples 200; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '300.png'; set title 'Sample rate: 300'; set key left box; set autoscale; set samples 300; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '400.png'; set title 'Sample rate: 400'; set key left box; set autoscale; set samples 400; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '500.png'; set title 'Sample rate: 500'; set key left box; set autoscale; set samples 500; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '600.png'; set title 'Sample rate: 600'; set key left box; set autoscale; set samples 600; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '700.png'; set title 'Sample rate: 700'; set key left box; set autoscale; set samples 700; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '800.png'; set title 'Sample rate: 800'; set key left box; set autoscale; set samples 800; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '900.png'; set title 'Sample rate: 900'; set key left box; set autoscale; set samples 900; plot [-30:20] sin(x)"
-e "set terminal pngcairo; set output '1000.png'; set title 'Sample rate: 1000'; set key left box; set autoscale; set samples 1000; plot [-30:20] sin(x)"
These commands work on their own like this:
gnuplot -e "set terminal pngcairo; set output '1000.png'; set title 'Sample rate: 1000'; set key left box; set autoscale; set samples 1000; plot [-30:20] sin(x)"
|
GNU Parallel quotes input by default. You give input that is already quoted. There are several solutions.
Change the input from:
-e "set terminal pngcairo; set output '100.png'; set title 'Sample rate: 100'; set key left box; set autoscale; set samples 100; plot [-30:20] sin(x)"
to:
set terminal pngcairo; set output '100.png'; set title 'Sample rate: 100'; set key left box; set autoscale; set samples 100; plot [-30:20] sin(x)
and run:
... | parallel gnuplot -e
Alternatively:
... | parallel eval gnuplot
# Requires version >= 20190722
... | parallel gnuplot {=uq=}
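To see why the already-quoted lines break, here is a small shell sketch (an illustration, not from the original answer) of what default quoting does: the literal quote characters in the input line end up inside a single argument, so gnuplot receives one long string instead of `-e` plus a script:

```shell
# the input line already contains shell quoting:
line='-e "set samples 100; plot sin(x)"'
# GNU Parallel (by default) passes the whole line as ONE quoted argument,
# so gnuplot sees '-e "set ..."' as a single unrecognized option string:
set -- "$line"
printf 'argc=%s\nargv1=%s\n' "$#" "$1"
```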
| Running several Gnuplot commands in parallel with Gnu Parallel |
1,630,653,200,000 |
I'm using gnu parallel to generate backups for about ~1500 web sites on pantheon.io, using their terminus CLI. The terminus backup:create command does not finish until a response is received that it has completed on the remote end. I'm wondering if there is any way to better speed this up with parallel so that more sites can be backing up while waiting on previous ones to complete or just run more overall if not. If it makes any difference, this is being run from a Jenkins CI job. Thank you.
#!/bin/bash +x
backup_sites() {
BACKUP=$(terminus backup:create "$*".live)
echo "$*": "$BACKUP"
}
SITE_LIST=$(terminus site:list --field=name)
export -f backup_sites
echo "$SITE_LIST" | parallel backup_sites
|
Not specifying how many jobs to run in parallel will make it default to number of cpus.
From the manual:
-j
Number of jobslots on each machine. Run up to N jobs in parallel. 0
means as many as possible. Default is 100% which will run one job per
CPU on each machine.
In general this is a safe bet, but you are waiting on network, not computation.
So you can easily boost up the number. I would try -j 200. It should work quite well. You can tweak this parameter to get the speed you need.
So echo "$SITE_LIST" | parallel -j 200 backup_sites instead of echo "$SITE_LIST" | parallel backup_sites
| gnu parallel - speed up command agains remote servers that waits |
1,630,653,200,000 |
I have a file which is an HTML document, containing a <table> I want to extract data from and output into a csv.
This file has 544609657 characters, is about 545 megabytes, all in a single line.
I managed to extract the data into a csv by using sed and making many string replacements, but I wanted to speed things up by using GNU parallel. Is this possible, considering it's a single line file?
My attempts below have not increased processing speed nor improved memory usage:
parallel -a table.html --pipepart 'sed -e [...etc.]' > table.csv
Or
cat table.html | parallel --pipe 'sed -e [...etc.]' > table.csv
I'm guessing the problem is because the file has a single line. If so, what strategies could I used to process the file more efficiently?
|
You have exactly the correct thoughts.
You just need to learn --recstart:
parallel --pipepart --recstart '<tr>' -a big --block -10 'sed ...' > table.csv
Here we assume each row of your HTML table starts with <tr>.
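If `--recstart` is unavailable, one alternative sketch (assuming GNU sed and that rows start with `<tr>`, which is a guess about your markup, not from the original answer) is to first break the single line into one table row per line, after which any line-oriented tool, including `parallel --pipe`, applies:

```shell
# insert a newline before every <tr> so each table row gets its own line
printf '<table><tr>a</tr><tr>b</tr></table>' |
  sed 's/<tr>/\n<tr>/g'
```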
| Use GNU Parallel when file has a single (long) line |
1,630,653,200,000 |
I want to run a series of parallel jobs based on a set of arguments while assigning a second argument. I use the --link option in GNU Parallel as
parallel --jobs 3 --link echo ::: A B C ::: D E F G
A D
B E
C F
A G
It works perfectly when the number of arguments in the first set is higher than in the second set.
In the above example, task A has been repeated twice.
How can I avoid any repetition in the first set of arguments? In other words, the tasks are A..C and D..G are just periodic arguments for the A..C tasks.
The argument should be
A D
B E
C F
like the case when the number of the first set is higher,
parallel --jobs 3 --link echo ::: A B C H ::: D E F
A D
B E
C F
H D
|
If you do not want an input source to repeat, make the input sources the same length. Instead of:
parallel --jobs 3 --link echo ::: A B C ::: D E F G
run:
parallel --jobs 3 --link echo ::: A B C ::: D E F
Currently you can also:
parallel --jobs 3 echo ::: A B C :::+ D E F G
but this is considered a bug, so do not expect this to work in the future.
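A hedged way (not from the original answer) to enforce the no-repeat behaviour is to truncate the longer list to the length of the shorter one before handing both to `--link`:

```shell
# truncate the second list to the length of the first;
# paste stands in here for the pairing that --link then produces
printf '%s\n' A B C > first.txt
printf '%s\n' D E F G | head -n "$(wc -l < first.txt)" > second.txt
paste -d' ' first.txt second.txt
```

With the files in place, `parallel --link echo :::: first.txt second.txt` would then pair the values with no repetition.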
| How to limit the number of tasks to the first arguments in GNU Parallel? |
1,630,653,200,000 |
I am trying to convert/use each line from a file as an input file. For example file.txt contains the following lines:
cat
dog
lion
tiger
rabbit
Now, using the following command:
cat file.txt | parallel -j 3 "cat "/path/to/tool/toolname" --dir {}.txt --log "/path/to/output/output.txt""
where each of the 5 converted files {}.txt contains cat, dog, lion, tiger, and rabbit respectively, and each separate file is then used individually as an input file to perform some task, as indicated above. How can I achieve this? That is, how can I first convert each line inside a file into an input file of its own?
@Ole Tange any thoughts on this. All other suggestions are welcome!!
|
cat file.txt |
parallel --pipe -n1 --cat -j 3 "cat "/path/to/tool/toolname" --dir {} --log "/path/to/output/output.txt""
Example:
cat file.txt |
parallel --pipe -n1 --cat wc {}
cat file.txt |
parallel --pipe -n1 --cat 'echo File number {#} contains;cat {}'
cat file.txt |
parallel --pipe -n1 --cat clamscan --dir {} --log sig_scan.log
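A sequential sketch of what `--cat` does (an illustration, not from the original answer): each input line is written to a temporary file, and that file's name — what `{}` expands to — is handed to the command. Here `wc -l` stands in for the real tool:

```shell
printf 'cat\ndog\nlion\n' > file.txt    # sample input
while IFS= read -r line; do
  tmp=$(mktemp)
  printf '%s\n' "$line" > "$tmp"        # one line becomes one input file
  wc -l < "$tmp"                        # stand-in for: toolname --dir "$tmp" ...
  rm -f "$tmp"
done < file.txt
```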
| convert each line in a file into an input file |
1,630,653,200,000 |
I am running multiple commands on remote hosts. If I put the command directly in my ssh script it works, but once I pass it in from another script as an argument it gives me a result from my first host and times out on the rest as soon as it logs into them. How can I pass a command from another script and get this working, and what could be the reason that my ssh script executes commands on one host only?
#Linux ssh
My_ssh_function () {
sudo sshpass -p "$1" ssh -q -o StrictHostKeyChecking=no root@"$2" "$command_linux"
}
export -f My_ssh_function
export -p command_linux
export -p file_with_targets_linux
export -p passwords
parallel -j 2 --tag My_ssh_function :::: "$passwords" "$file_with_targets_linux"
command that I am passing from a different file:
(working fine as long as I don't send it from another script to my script above)
""/sbin/dmidecode | /usr/bin/grep "Product Name:" | /usr/bin/awk '{print $4, $5}' > /tmp/result && free -h --si |grep Mem | awk '{print $2}' >> /tmp/result && dmidecode -s system-serial-number >> /tmp/result && hostname |awk -F"." '{print $1}' >> /tmp/result && cat /tmp/result |xargs""
parallel: Warning: My_ssh_function host_password_string 135.121.157.80
parallel: Warning: This job was killed because it timed out:
|
You are leaving out something from your example: The example is not complete, because GNU Parallel will never give that error if you do not have a --timeout.
That said you should test that My_ssh_function works as expected.
So try this:
# remove --tag from parallel
parallel --dryrun ... > myfile.sh
# Does myfile.sh contain what you expect now?
cat myfile.sh
# Does it run as you expect?
bash myfile.sh
If it works for some of the hosts, I guess the sshpass ... ssh fails for some hosts.
If it does not work for any host, I guess that your quoting is off.
Personally I would:
Remove sudo: It is unclear why you need to run sshpass as root.
Not use sshpass. Instead use ssh-copy-id once to make it possible for your user to log in. Use ssh-agent to still have a passphrase on your ssh key, but not having to enter it when logging in. That way a criminal who gets access to a backup of ~/.ssh will not be able to use your key.
When that is setup you should be able to let GNU Parallel do the logging in directly using --ssh and --slf and use --onall or --nonall to run the command.
When that works you might consider converting your long one-line script to a bash function and use env_parallel to copy it to the remote machine.
Something like this:
at_startup() {
# Add sshkey to sshagent unless already done
if [ -e ~/.ssh/SSH_AUTH_SOCK ] ; then
export SSH_AUTH_SOCK=`cat ~/.ssh/SSH_AUTH_SOCK`
fi
if [ -e ~/.ssh/SSH_AGENT_PID ] ; then
export SSH_AGENT_PID=`cat ~/.ssh/SSH_AGENT_PID`
fi
if ssh-add -l ; then
true
else
eval `ssh-agent` ssh-add ~/.ssh/id*[^b] &&
echo $SSH_AUTH_SOCK > ~/.ssh/SSH_AUTH_SOCK &&
echo $SSH_AGENT_PID > ~/.ssh/SSH_AGENT_PID
fi
}
setup_ssh_keys_once() {
setupone() {
sshpass -p "$1" ssh-copy-id -o StrictHostKeyChecking=no root@"$2"
}
export -f setupone
parallel setupone :::: "$passwords" "$file_with_targets_linux"
}
env_parallel --session
command_linux() {
/sbin/dmidecode | /usr/bin/grep "Product Name:" | /usr/bin/awk '{print $4, $5}' > /tmp/result &&
free -h --si |grep Mem | awk '{print $2}' >> /tmp/result &&
dmidecode -s system-serial-number >> /tmp/result &&
hostname | awk -F"." '{print $1}' >> /tmp/result &&
cat /tmp/result |xargs
}
# Yes: env_parallel can copy a function to a remote server - even if it is not exported
env_parallel --ssh 'ssh -l root' --slf "$file_with_targets_linux" --nonall command_linux
| gnu parallel: Warning: This job was killed because it timed out |
1,630,653,200,000 |
I'm trying to rsync approx 10 TB of data from a remote system to the local machine, and I want to use the parallel utility for multi-thread execution.
I want to trigger the rsync from the local server. Can someone please suggest how I may do this?
|
Can you elaborate on why this does not work:
seq -w 0 99 | parallel rsync -Havessh fooserver:src/*{}.png destdir/
From https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Parallelizing-rsync
| How may I run rsync with "parallel" from the local system to fetch files in parallel? |
1,630,653,200,000 |
Suppose I have the following command to be parallelized:
my_command --file <(my | pipeline)
Now, I would like to parallelize in specific chunks:
my | pipeline | parallel --spreadstdin my_command --file <(parallel's stdin)
How would I accomplish this redirection with gnu parallel?
|
If I understand this right, parallel --spreadstdin sends the blocks of input piped to the stdin of the processes it runs, so it's not Parallel's stdin you want my_command to read from, but its own.
If my_command doesn't default to reading stdin, you can usually use /dev/stdin in place of a filename, it resolves to the same file/pipe as the "original" stdin.
So
my | pipeline | parallel --spreadstdin my_command --file /dev/stdin
should be what you want.
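A quick check of the /dev/stdin trick on its own (an illustration, not from the original answer): head is given /dev/stdin as a "filename" but actually reads the pipe:

```shell
# head normally wants a file name; /dev/stdin makes it read the pipe instead
printf 'first\nsecond\n' | head -n 1 /dev/stdin
```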
| GNU Parallel: redirect piped stdin as if it were a file |
1,630,653,200,000 |
Disclaimer: This is a more general question of the one I asked on biostars.org about parallel and writing to file.
When I run a program (obisplit from obitools package) sequentially, it reads one file and creates a number of files based on some criterion (not important here) in the original file:
input_file.fastq
|____ output_01.fastq
|____ output_02.fastq
|____ output_03.fastq
However, when I split the input file and run them in parallel (version from ubuntu repo: 20141022),
find . * | grep -P "^input_file" | parallel -j+3 obisplit -p output_{/.}_ -t variable_to_split_on {/}
I would expect to get files
input_file_a.fastq
|____ output_input_file_a_01.fastq
|____ output_input_file_a_02.fastq
|____ output_input_file_a_03.fastq
input_file_b.fastq
|____ output_input_file_b_01.fastq
|____ output_input_file_b_02.fastq
|____ output_input_file_b_03.fastq
input_file_c.fastq
|____ output_input_file_c_01.fastq
|____ output_input_file_c_02.fastq
|____ output_input_file_c_03.fastq
but the output is only printed to console.
Is there something inherent in parallel which causes this printing to console or could this be the way obisplit is behaving for whatever reason? Is there a way to convince each core commandeered by parallel to print to a specific file instead of the console?
|
It sounds as if obisplit behaves differently if output is redirected.
You can ask GNU Parallel to output to files:
seq 10 | parallel --results output_{} echo this is input {} >/dev/null
(or if your version is older:
seq 10 | parallel echo this is input {} '>' output_{}
)
It will create output_#,output_#.err,output_#.seq.
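The `'>'` redirection variant from the answer, unrolled sequentially for illustration (not from the original answer):

```shell
# each input value gets its own output file, mirroring
#   seq 3 | parallel echo this is input {} '>' output_{}
for i in 1 2 3; do
  echo "this is input $i" > "output_$i"
done
cat output_2
```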
| why is this parallel process not writing output to files but printing to console instead? |
1,630,653,200,000 |
I have a set of .txt file-pairs. In each pair of files, File1 contains a single integer and File2 contains many lines of text. In the script I'm writing, I'd like to use the integer in File1 to specify how many lines to take off the top of File2 and then write those lines to another file. I'm using gnu-parallel to run this on many file-pairs in parallel.
It seems like a simple way to do this would be to pass the contents of File1 as the parameter for the -n option of head -- is this possible? I've tried using xargs and cat File1, but neither is working.
An example file-pair:
File1:
2
File2:
AAA
BBB
CCC
DDD
Desired output:
File3:
AAA
BBB
If I were not using gnu-parallel, I could assign the contents of File1 to a variable (though I don't know if I could pass that into head's -n option?); however, parallel's {} seems to complicate this approach.
I can provide more information if needed.
|
Extending Gilles answer:
parallel 'head -n "$(cat {1})" {2}' ::: File1s* :::+ Corresponding_File2s*
You probably have a lot of File1s that you want linked to File2s. The :::+ does that.
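Reproducing the File1/File2 example from the question with the `head -n "$(cat File1)"` pattern the answer builds on (a sequential check, not a parallel run):

```shell
printf '2\n' > File1                     # how many lines to keep
printf 'AAA\nBBB\nCCC\nDDD\n' > File2    # the data
head -n "$(cat File1)" File2 > File3
cat File3
```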
| How to pass the contents of a file to an option/parameter of a function |
1,630,653,200,000 |
I'm executing 60 scripts with GNU parallel (they all have wget commands in them), but I have noticed that after a few hours execution slows down a bit. What could be causing this?
I'm executing parallel with this command: parallel -j 60 < list where "list" is just a file with directories to 60 scripts.
I'm on a CentOS 6.5 machine.
|
From Understanding the Linux Kernel:
In Linux, process priority is dynamic. The scheduler keeps track of what processes are doing and adjusts their priorities periodically; in this way, processes that have been denied the use of the CPU for a long time interval are boosted by dynamically increasing their priority. Correspondingly, processes running for a long time are penalized by decreasing their priority.
| Why does parallel slow down after a while? |
1,630,653,200,000 |
Scenario:
$ cat libs.txt
lib.a
lib1.a
$ cat t1a.sh
f1()
{
local lib=$1
stdbuf -o0 printf "job for $lib started\n"
sleep 2
stdbuf -o0 printf "job for $lib done\n"
}
export -f f1
/usr/bin/time -f "elapsed time %e" cat libs.txt | SHELL=$(type -p bash) parallel --line-buffer --jobs 2 f1
$ bash t1a.sh
elapsed time 0.00
job for lib.a started
job for lib1.a started
job for lib.a done
job for lib1.a done
Here we see that elapsed time 0.00 appears before the command's output. Why?
How can I make elapsed time 0.00 appear after the command's output?
|
Try:
cat libs.txt | SHELL=$(type -p bash) /usr/bin/time -f "elapsed time %e" parallel --line-buffer --jobs 2 f1
| Why does /usr/bin/time coupled with GNU parallel output results before command's output rather than after command's output? |
1,630,653,200,000 |
parallel --joblog /tmp/log exit ::: 1 2 3 0
cat /tmp/log
How can I use a filter to write only failed jobs to the log when using GNU parallel, or is there a way to get only the failed jobs from the above log? I'm a beginner at this.
|
parallel --joblog /tmp/log exit ::: 1 2 3 0
cat /tmp/log
cat /tmp/log | perl -ane '$F[6] and print'
Not sure why you need this, but if you are going to retry them, you may want to read about --retry-failed --retries --resume-failed.
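An awk equivalent run on a synthetic joblog (the two rows below are made up for illustration; unlike the perl version, this skips the header line):

```shell
# build a tiny joblog: Exitval is the 7th tab-separated column
printf 'Seq\tHost\tStarttime\tJobRuntime\tSend\tReceive\tExitval\tSignal\tCommand\n' > log
printf '1\t:\t0\t0.1\t0\t0\t0\t0\texit 0\n' >> log
printf '2\t:\t0\t0.1\t0\t0\t1\t0\texit 1\n' >> log
# keep only rows whose exit value is non-zero
awk -F'\t' 'NR > 1 && $7 != 0' log
```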
| How do we write only fail jobs in a log when we use GNU parallel |
1,630,653,200,000 |
I would like to split an input file on character count (ASCII is fine), combined with newlines as well. That is, every group of 10000 characters should be seen as one record to be piped into the child process, but if that 10000th character does not happen to be at the end of a line, the whole line should be included (and thus more than 10000 characters are provided). Each line should be considered as a single entity, which cannot be split.
Is that possible with GNU parallel (or possibly with a chain of other tools which might be useful)?
|
What you are asking for is pretty much:
seq 100000 | parallel --block 10k --pipe wc
It will pass a block around 10000 bytes to wc but will only give full lines.
It will not guarantee that the block will be at least 10 kbytes, but it will at most be one line off.
| Is it with GNU parallel possible to split on character count, but provide full lines only? |
1,630,653,200,000 |
I'm trying to use GNU parallel in a script, and I noticed that with -jX it only starts to output after X jobs have been read
# Only spawns cat after 100 seconds
(echo a; sleep 100) | parallel -j1 --lb cat
# Starts instantly
(echo a; echo a; sleep 100) | parallel -j1 --lb cat
The first job needs to be launched before the others (because it would define the other jobs, as parallel pipes the output to another script), but parallel is waiting for 3 more jobs
Is there a way to change this pattern?
|
Upgrade to 20181222 or later.
# Spawns a instantly
(echo a; sleep 100) | parallel -j1 --lb cat
# Starts a and b instantly, outputs a immediately, b after 100 sec
(echo a; echo b; sleep 100) | parallel -j1 --lb cat
# Starts a and b instantly, outputs a and b immediately (but output may be mixed)
(echo a; echo b; sleep 100) | parallel -j2 --lb cat
| GNU Parallel waits for n jobs before starting |
1,630,653,200,000 |
I have a directory with files that look like this:
id1_1.txt
id1_2.txt
id2_1.txt
id2_2.txt
I need to pass these files as a couple (e.g id1_1.txt and id1_2.txt) to my_script.
Here's what I thought would work
parallel -j +0 -X python my_script.py -1 {} -2 {= s/_1/_2/ =} -o /output/dir/good /output/dir/bad ::: /my/dir/*_1.txt
where -1 would be files ending in _1.txt and -2 would be its partner ending _2.txt.
my_script recognizes the input for option -1, but not the input for option -2. Clearly, it's only looking for the initial part of option -2:
No such file or directory: '{='
I tried adding quotes, but still get the same error.
Parallel version:
$ parallel --version
GNU parallel 20120522
Copyright (C) 2007,2008,2009,2010,2011,2012 Ole Tange and Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
GNU parallel comes with no warranty.
Web site: http://www.gnu.org/software/parallel
When using GNU Parallel for a publication please cite:
O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
;login: The USENIX Magazine, February 2011:42-47.
|
As steeldriver pointed out, the version of parallel that I had installed pre-dated the syntax I was using (GNU parallel - NEWS).
As a side note, the GNU parallel index lists the OLDEST versions at the top and the newest at the bottom. When I downloaded parallel on my new workspace, I didn't pay attention to this and grabbed the top .tar thinking it was the newest version.
| Syntax error when using sed to replace line-specific string in parallel: {= s/_1/_2/ =}? |
1,630,653,200,000 |
I need to run a command for each individual instance of a given variable name in parallel. Sometimes, there might be 4 variables, other times there might be 100. For example, say I have this particular dataset as:
datanames='KQPW KMMX KMKO KZAO'
I need to run a process for each which is to be run in parallel with one-another. In other words, I need to run process1 for KQPW while running process1 for KMMX while ... etc. Process1 requires input based on the variablename.
From the tutorials I have read, and some initial digging, I have installed the GNU 'parallel' command. I have put all of the datanames into a textfile called "run.txt":
KQPW.csh
KMMX.csh
KMKO.csh
KZAO.csh
wherein each of the .csh files contains the command for calling process1 with the unique variable name as the necessary input to process1. The question is, how do I run all four of these commands at once? I tried:
cat run.txt | parallel
but nothing happened. Any thoughts?
|
A quick demonstration of executing the scripts described above in parallel, without using any external tools:
#!/bin/bash
datanames='KQPW KMMX KMKO KZAO'
datanamesarray=($datanames)
for item in ${datanamesarray[@]}; do
( ./${item}.csh; sleep 10 ) &
done
echo waiting..
wait
echo done
Executing this will display waiting.. followed by a ten second delay as all of the subshells are executed in parallel. wait will pause the parent script until all subshells have terminated before proceeding. The echo, sleep, and wait statements are here for demonstrative purposes.
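An alternative sketch using xargs -P (not part of the original answer); sh stands in for csh here so the example is self-contained:

```shell
# two throw-away scripts standing in for KQPW.csh etc.
printf 'echo one\n' > a.sh
printf 'echo two\n' > b.sh
printf 'a.sh\nb.sh\n' > run.txt

# run every script listed in run.txt, up to 4 at a time
# (substitute 'csh' for 'sh' to run the .csh scripts from the question)
xargs -P 4 -n 1 sh < run.txt
```

Note that with -P the scripts run concurrently, so their output order is not deterministic.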
| Running multiple bash scripts with different names in parallel |
1,630,653,200,000 |
I need to copy a large amount of files into their own directories. The issue I am having is keeping them in order when I copy them with GNU parallel. For example, file_1.output gets placed in dir_19.
Here is what I have so far that is working, besides the order of files.
ls *.output > copy.list
parallel "mkdir cele_{}" ::: {1..10000}
parallel -k --link "cp {} cele_{}" :::: copy.list ::: {1..10000}
Is there a way to do this without sacrificing parallel?
(Inspired by https://rbt.asia/g/thread/64890073/#64890111)
|
You can use --rpl to define your own replacement string and then use that both for mkdir and cp.
ls *.output | parallel --rpl '{dir} s/\.output$/_dir/' 'mkdir {dir} && cp {} {dir}'
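What the one-liner does, unrolled sequentially on sample files (an illustration, not from the original answer):

```shell
printf 'x\n' > a.output    # sample files
printf 'y\n' > b.output
for f in *.output; do
  d=${f%.output}_dir       # a.output -> a_dir, matching the {dir} replacement
  mkdir -p "$d" && cp "$f" "$d"/
done
ls a_dir b_dir
```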
| Keeping dirs in order with GNU Parallel |
1,630,653,200,000 |
I use GNU parallel to run a pipe on multiple files in parallel. My code does what it should; however, when specifying the max number of CPUs (in my case 64), each job uses <5% of each CPU (based on htop). In addition, the number of tasks and threads (again based on htop) goes through the roof, which eventually kills the server. If I specify only 30 cores in GNU parallel it runs fine. Does anyone know how to max out the power of the server?
My command is a pipe of different tools to trim genomic reads:
parallel --jobs 64 "echo -e '\n'{} processing 1>&2 ; \
gunzip -c {} | scriptA.sh | scriptB.sh -outfmt fasta \
| java -jar scriptC.jar |bgzip \
> ${output}/tmp/{/.}.filtered.tmp.fa.gz " ::: ${input} 2> ${output}/0log_parallel_stderr.log
|
As Luciano says in the comment, disk I/O is most likely the cause.
The reason for getting more processes is that your pipeline will start at least 5 processes. So you should see at least 64*5 processes being started. Some of these may also start several threads.
Parallel disk I/O is very unpredictable (See https://oletange.wordpress.com/2015/07/04/parallel-disk-io-is-it-faster/), and it is in practice impossible to say how many jobs in parallel is the optimal, because it depends on so many factors.
So to optimize your flow, I would adjust the number of jobs until you got the most throughput. You can use --joblog to help you see how long each job runs.
| gnu parallel multithreading pipe uses little CPU% but stalls server |
1,630,653,200,000 |
First of all, yes, locked into csh on a Solaris box, can't do anything about it, sorry.
I have a report batch I was running using a foreach loop. Right now it runs as a single thread and I would like to speed it up with GNU parallel. I have been trying two different approaches but hitting roadblocks on each.
Here is my current version:
if( $#argv <= 1) then
#Get today's date
set LAST = `gdate +%Y-%m-%d`
else
#use date passed in parameter
set LAST=`echo $2 | tr -d _/`;
endif
if( $#argv == 0 ) then
#default to 15 day lookback
set COUNT = 15
else
#otherwise use parameter
set COUNT = $1
endif
@ LCOUNT = $COUNT + 1 #increment by one to exclude $LAST date
#get starting date by subtracting COUNT (now incremented by 1)
set START = "`gdate --date='$LAST -$LCOUNT day' +%Y/%m/%d`";
#loop through dates, generate report string, and pipe to reportcli
foreach i (`seq $COUNT`)
set REPDATE = "`gdate --date='$START +$i day' +%Y/%m/%d`";
set FILEDATE = "`gdate --date='$START +$i day' +%Y%m%d`";
echo "runf reportname.rep -ps "$REPDATE" -pe "$REPDATE" -o report_"$FILEDATE".csv" \
| reportcli <cli params here>
end
So I would like to get this working with parallel, but as you can see I have a boatload of command expansion/substitution going on.
I tried a few different approaches, including making an array of the string passed to the reportcli, but I can't figure out how to get it to play nice.
As I see it, I have two choices:
A) one big line (have to iron out all the quoting problems to get the gdate command substitution to work):
`seq $COUNT` | parallel reportcli <cli params> < "runf reportname.rep -ps \
`gdate --date='$START +{} day' +%Y/%m/%d` -pe `gdate --date='$START +{} day' +%Y/%m/%d` \
-o report_`gdate --date='$START +${} day' +%Y%m%d`.csv"
B) Assemble a csh array beforehand, then try to expand the array (expand with echo?), pipe to parallel
set CMDLIST
foreach i (`seq $COUNT`)
set REPDATE = "`gdate --date='$START +$i day' +%Y/%m/%d`";
set FILEDATE = "`gdate --date='$START +$i day' +%Y%m%d`";
set CMDLIST = ($CMDLIST:q "runf reportname.rep -ps "$REPDATE" -pe "$REPDATE" \
-o report_"$FILEDATE".csv")
end
I know my array is good because I can do this and get back each element:
foreach j ($CMDLIST:q)
echo $j
end
but, I'm not sure how to get this to work in csh:
echo $CMDLIST | parallel --pipe "reportcli <cli params here>"
Thanks in advance!!
|
Write a script. Call that from GNU Parallel:
[... set $START and $COUNT ...]
seq $COUNT | parallel my_script.csh $START {}
my_script.csh:
#!/bin/csh
set START = $1
set i = $2
set REPDATE = "`gdate --date='$START +$i day' +%Y/%m/%d`";
set FILEDATE = "`gdate --date='$START +$i day' +%Y%m%d`";
echo "runf reportname.rep -ps "$REPDATE" -pe "$REPDATE" -o report_"$FILEDATE".csv" \
| reportcli <cli params here>
| csh array/command substitution with gnu parallel |
1,630,653,200,000 |
I have some 5 million text files under a directory, all of the same format (nothing special, just plain text files with some integers on each line). I'd like to compute the maximum and minimum line count among all these files.
I started out by trying to write out all the line counts like so (and then work out how to find the min and max from this list):
wc -l `find /some/data/dir/with/text/files/ -type f` > report.txt
but this throws me an error:
bash: /usr/bin/wc: Argument list too long
Perhaps there is a better way to go about this?
Maybe GNU-Parallel can help here somehow?
|
xargs exists to deal with this exact situation, and will work as long as the filenames involved don't contain spaces or newlines:
find /some/data/dir/with/text/files/ -type f -print | xargs wc -l
You could then sort the output numerically on the line-count field (sort -n). If you don't care about which specific files contain the minimum and maximum lines, you could extract the line-count field from each sorted line and pipe it to uniq; the first line of the resulting output is then the minimum line count, and the last line is the maximum line count.
This does, admittedly, involve holding on to a lot of data through the process of computing the information you're looking for, so it might be better to pipe the output of the find | xargs pipeline to an awk script that just runs through each line and then tracks if each line count is smaller than the minimum it's seen so far, or larger than the maximum it's seen so far.
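The streaming approach from the last paragraph might look like this (a sketch; -print0/-0 handles spaces in filenames, though filenames containing newlines would still break it):

```shell
# wc prints "COUNT PATH" per file plus a "COUNT total" summary per xargs batch;
# awk skips the summary lines and tracks the running min and max
find /some/data/dir/with/text/files/ -type f -print0 \
    | xargs -0 wc -l \
    | awk '$2 != "total" {
               if (min == "" || $1 < min) min = $1
               if (max == "" || $1 > max) max = $1
           }
           END { print "min:", min, "max:", max }'
```

Since find prints full paths, the `$2 != "total"` test can never accidentally exclude a real file.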
| get minimum and maximum line count from files within a directory |
1,630,653,200,000 |
I can't append to an array when I use parallel, no issues using a for loop.
Parallel example:
append() { arr+=("$1"); }
export -f append
parallel -j 0 append ::: {1..4}
declare -p arr
Output:
-bash: declare: arr: not found
For loop:
for i in {1..4}; do arr+=("$i"); done
declare -p arr
Output:
declare -a arr=([0]="1" [1]="2" [2]="3" [3]="4")
I thought the first example is a translation of the for loop in functional style, so what's going on?
|
Your parallel appears to be the GNU one, which is a perl script that runs commands in parallel.
It tries very hard to tell what shell it is being invoked from so that the command that you pass to it is interpreted by that shell, but to do that it runs a new invocation of that shell in separate processes.
If you run:
bash-5.2$ env SHELLOPTS=xtrace PS4='bash-$$> ' strace -qqfe /exec,/exit -e signal=none parallel -j 0 append ::: {1..4}
execve("/usr/bin/parallel", ["parallel", "-j", "0", "append", ":::", "1", "2", "3", "4"], 0x7ffe5e848c90 /* 56 vars */) = 0
[...skipping several commands run by parallel during initialisation...]
[pid 7567] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 1"], 0x55a2615f03e0 /* 67 vars */) = 0
bash-7567> append 1
bash-7567> arr+=("$1")
[pid 7567] exit_group(0) = ?
[pid 7568] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 2"], 0x55a2615f03e0 /* 67 vars */) = 0
[pid 7568] exit_group(0) = ?
[pid 7569] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 3"], 0x55a2615f03e0 /* 67 vars */) = 0
bash-7568> append 2
bash-7568> arr+=("$1")
[pid 7569] exit_group(0) = ?
[pid 7570] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 4"], 0x55a2615f03e0 /* 67 vars */) = 0
bash-7569> append 3
bash-7569> arr+=("$1")
[pid 7570] exit_group(0) = ?
bash-7570> append 4
bash-7570> arr+=("$1")
exit_group(0) = ?
Where strace shows what commands are executed by what process and the xtrace option causes the shell to show what it does.
You'll see each bash shell append an element to its own $arr and then exit; its memory space, including its individual $arr array, is then gone. The $arr array is not automagically shared between all bash shell invocations on your system.
In any case, running commands concurrently implies running them in different processes, so there's no way it can run those functions in the invoking shell, those functions will be run in new shell instances in separate processes and they will update the arr variables of those shells, not the one of the shell you run parallel from.
Given that bash has no builtin multithreading support, even if parallel were an internal command of the shell or implemented as a shell function, it would still need to run the commands in separate processes, each with its own memory. You'll find that in:
append 1 & append 2 & append 3 & wait
Or:
append 1 | append 2 | append 3
The $arr array of the parent shell is not modified either.
If you want to collect the result of each job started by parallel, you can do it via stdout or via files.
For instance:
#! /bin/bash -
do_something() {
output=$(
echo "$1: some complex computation or otherwise there would
be no point using GNU parallel and its big overhead"
)
# output the result NUL delimited.
printf '%s\0' "$output"
}
export -f do_something
readarray -td '' arr < <(
PARALLEL_SHELL=/bin/bash parallel do_something ::: {1..4}
)
typeset -p arr
(here telling parallel which shell to use for it to avoid having to guess).
Note that parallel stores the output of each shell in a temporary file and dumps them in order on stdout so you get the elements of the array in correct order.
| Unable to append to array using parallel |
1,630,653,200,000 |
So I know that to enable passwordless SSH I need to generate an SSH key pair and append the public authentication key to the remote host's ~/.ssh/authorized_keys file.
The question is: can I have an array of passwords and try them one after another until the right password is found, or do I need actual pairs, i.e. to know which host has which password?
Reason: I have thousands of hosts and I don't know which host has which password, but I do have a list of all possible passwords.
I want to use GNU parallel to ssh but for that I need a Passwordless SSH.
I guess another option would be to run in parallel my script that tries different passwords until success.
|
sshpass does that.
You should run this only on a trusted system (i.e. one with no attackers: the passwords will be shown in cleartext if another user runs ps).
testone() { sshpass -p "$1" ssh "$2" echo OK; }
export -f testone
parallel --tag -k testone :::: passwords.txt hostlist.txt 2>/dev/null
Be aware that some systems will see this as an attack and thus lock you out for a period if you guess wrongly 3 times in a row. So you should keep track of your successes and remove them from hostlist.txt.
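For the bookkeeping, a modified testone could append a host<TAB>password line to a log on success; the prune step is then a one-liner. A sketch (found.txt, hostlist.txt contents and names here are hypothetical demo data):

```shell
# demo data: found.txt logs host<TAB>password for already-cracked hosts
printf 'alpha\tsecret1\n' > found.txt
printf 'alpha\nbeta\ngamma\n' > hostlist.txt

# drop already-cracked hosts (first column of found.txt) from hostlist.txt
cut -f1 found.txt | grep -vxF -f - hostlist.txt > remaining.txt
cat remaining.txt
# beta
# gamma
```

grep's -x matches whole lines and -F treats the host names as fixed strings, so hosts whose names are substrings of others are not pruned by accident.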
| Passwordless SSH, list of possible passwords to try against any given host |
1,630,653,200,000 |
How safe is it to use export in bash scripts when using GNU Parallel?
I have a parent script.
parent.sh
(echo child.sh & echo child_two.sh) || parallel bash
wait
if [[ "$STATUS1" == "0" && "$STATUS2" == "0" ]];
then
//continue
else
//stop the process
fi
child.sh
Getting the STATUS1 based on another process
export STATUS1
child_two.sh
Getting the STATUS2 based on another process
export STATUS2
Is it safe to use export, and do the values of STATUS1 & STATUS2 get reset every time it is run?
|
I think you will benefit from looking into parset https://www.gnu.org/software/parallel/parset.html
$ parset myvar 'echo do stuff with {};(exit {}); echo $?' ::: 0 0 1 2 3 0 0
$ echo "${myvar[3]}"
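If all you need are the two exit statuses, plain background jobs also work and sidestep export entirely. A sketch, where the (exit N) subshells are stand-ins for your child scripts:

```shell
# stand-ins for child.sh and child_two.sh (replace with your real scripts)
(exit 0) & pid1=$!
(exit 3) & pid2=$!

# wait PID returns the exit status of that job; the || keeps this
# errexit-safe when a child fails
STATUS1=0; wait "$pid1" || STATUS1=$?
STATUS2=0; wait "$pid2" || STATUS2=$?

echo "STATUS1=$STATUS1 STATUS2=$STATUS2"
# STATUS1=0 STATUS2=3
```

The statuses come back to the parent shell via wait, so nothing needs to be exported from the children.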
| How safe is it to use EXPORT in bash scripts when using GNU Parallel? |
1,630,653,200,000 |
I'm using gnu parallel that reads a text file containing curl commands.
If I do ps -ef | grep -cw [c]url it shows the total number of curl processes at that moment.
But I want to know the number of GNU Parallel jobs running on each CPU core.
How to find that?
|
Taking the question literally, I do not see a way you can do that: a process may run 1 ms on one core and the next ms on another core.
Your comment is, however, easy to answer:
I wanna ensure all cores are involved. How do I verify that?
htop
This shows all cores are busy:
1 [|||||93.0%] 17 [|||||97.3%] 33 [|||||87.1%] 49 [|||||91.0%]
2 [|||||95.3%] 18 [|||||98.5%] 34 [|||||91.4%] 50 [|||||92.5%]
3 [|||||93.7%] 19 [|||||96.9%] 35 [|||||87.4%] 51 [|||||91.3%]
4 [|||||90.2%] 20 [|||||96.2%] 36 [|||||92.1%] 52 [|||||94.9%]
5 [|||||95.3%] 21 [|||||97.3%] 37 [|||||87.6%] 53 [|||||90.7%]
6 [|||||95.3%] 22 [|||||97.3%] 38 [|||||92.0%] 54 [|||||93.4%]
7 [|||||91.7%] 23 [|||||97.7%] 39 [|||||86.7%] 55 [|||||94.2%]
8 [|||||92.5%] 24 [|||||98.4%] 40 [|||||93.3%] 56 [|||||91.8%]
9 [|||||97.3%] 25 [|||||97.7%] 41 [|||||93.8%] 57 [|||||92.9%]
10 [|||||97.7%] 26 [|||||96.9%] 42 [|||||94.9%] 58 [|||||93.5%]
11 [|||||97.7%] 27 [|||||98.4%] 43 [|||||95.3%] 59 [|||||90.8%]
12 [|||||97.3%] 28 [|||||97.7%] 44 [|||||95.3%] 60 [|||||91.3%]
13 [|||||96.5%] 29 [|||||97.3%] 45 [|||||95.7%] 61 [|||||93.2%]
14 [|||||97.7%] 30 [|||||97.7%] 46 [|||||95.3%] 62 [|||||93.3%]
15 [|||||97.7%] 31 [|||||97.7%] 47 [|||||94.5%] 63 [|||||92.6%]
16 [|||||96.9%] 32 [|||||95.7%] 48 [|||||94.0%] 64 [|||||94.9%]
Avg[||||||||||||||||||||||||94.5%] Tasks: 302, 10084 thr; 39 running
Mem[||| 10.1G/504G] Load average: 119.46 47.95 29.59
Swp[ 0K/0K] Uptime: 00:17:23
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
2457 vcache 20 0 4985M 1284M 86536 S 440. 0.2 10:54.69 /usr/sbin/varn
351883 www-data 20 0 1954M 11736 3748 S 142. 0.0 1:20.80 /usr/sbin/apac
433876 www-data 20 0 1953M 11392 3748 S 132. 0.0 0:29.07 /usr/sbin/apac
411605 www-data 20 0 1889M 11424 3748 S 99.0 0.0 0:40.96 /usr/sbin/apac
2466 varnishlo 20 0 87080 84156 83468 S 42.6 0.0 1:07.94 /usr/bin/varni
371877 tange 20 0 15916 11136 3500 R 40.3 0.0 0:24.92 htop
372227 tange 20 0 25960 19820 5688 D 33.2 0.0 0:15.86 perl /usr/loca
372060 tange 20 0 25912 19816 5692 D 32.5 0.0 0:15.21 perl /usr/loca
F1Help F2Setup F3SearchF4FilterF5Tree F6SortByF7Nice -F8Nice +F9Kill F10Quit
This shows an idle machine:
1 [| 0.5%] 17 [ 0.0%] 33 [ 0.0%] 49 [ 0.0%]
2 [ 0.0%] 18 [ 0.0%] 34 [ 0.0%] 50 [ 0.0%]
3 [ 0.0%] 19 [ 0.0%] 35 [ 0.0%] 51 [ 0.0%]
4 [ 0.0%] 20 [ 0.0%] 36 [ 0.0%] 52 [ 0.0%]
5 [ 0.0%] 21 [| 0.5%] 37 [ 0.0%] 53 [ 0.0%]
6 [ 0.0%] 22 [ 0.0%] 38 [ 0.0%] 54 [ 0.0%]
7 [ 0.0%] 23 [ 0.0%] 39 [ 0.0%] 55 [ 0.0%]
8 [|||| 29.6%] 24 [ 0.0%] 40 [ 0.0%] 56 [ 0.0%]
9 [| 0.5%] 25 [| 0.5%] 41 [| 0.5%] 57 [| 0.9%]
10 [ 0.0%] 26 [| 0.5%] 42 [ 0.0%] 58 [| 0.5%]
11 [ 0.0%] 27 [|| 0.9%] 43 [ 0.0%] 59 [| 0.9%]
12 [ 0.0%] 28 [| 0.5%] 44 [ 0.0%] 60 [| 0.5%]
13 [ 0.0%] 29 [| 0.5%] 45 [| 0.5%] 61 [| 0.9%]
14 [ 0.0%] 30 [| 0.5%] 46 [| 0.5%] 62 [| 0.9%]
15 [ 0.0%] 31 [| 0.5%] 47 [| 0.5%] 63 [|| 1.4%]
16 [ 0.0%] 32 [| 0.5%] 48 [ 0.0%] 64 [| 0.9%]
Avg[|| 0.7%] Tasks: 132, 10350 thr; 1 running
Mem[||| 7.15G/504G] Load average: 1.02 21.86 20.72
Swp[ 0K/0K] Uptime: 00:15:10
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
2457 vcache 20 0 4913M 1191M 86520 S 0.5 0.2 7:02.69 /usr/sbin/varn
3023 mysql 20 0 2092M 377M 35696 S 0.5 0.1 0:00.50 /usr/sbin/mysq
4217 root 20 0 2366M 44548 19464 S 0.5 0.0 0:00.12 /usr/lib/snapd
371811 root 20 0 23436 5048 1892 S 0.0 0.0 0:00.03 /lib/systemd/s
371814 root 20 0 23436 5048 1892 S 0.0 0.0 0:00.03 /lib/systemd/s
371816 root 20 0 23436 5048 1892 S 0.0 0.0 0:00.03 /lib/systemd/s
371817 root 20 0 23436 5048 1892 S 0.0 0.0 0:00.03 /lib/systemd/s
371818 root 20 0 23436 5048 1892 S 0.0 0.0 0:00.03 /lib/systemd/s
F1Help F2Setup F3SearchF4FilterF5Tree F6SortByF7Nice -F8Nice +F9Kill F10Quit
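If you also want a point-in-time snapshot of which core each worker last ran on, procps ps can report it via the psr output field (Linux-specific):

```shell
# PSR = the processor (core) the process last executed on.
# $$ (this shell) is used here so the demo always produces a line;
# replace '-p $$' with '-C curl' to inspect your curl workers instead.
ps -o pid,psr,comm -p $$
```

Remember this is only a snapshot: the scheduler may move the process to another core a millisecond later.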
| How to find the number of jobs (created by gnu parallel) running in each CPU core at any given time? |
1,630,653,200,000 |
I have written a script that finds files in directories and runs them through an if statement; here is the code:
for dirname in /input/*; do
id=${dirname#/input/} # remove "/input/sub-"
id=${id%/} # remove trailing "/"
printf 'Adding ID to recon-all processing list: %s\n' "${id}" >&2
T11=`find /input/${id}/unprocessed/3T -name "*T1*MPR1*" -type f`
T12=`find /input/${id}/unprocessed/3T -name "*T1*MPR2*" -type f`
T21=`find /input/${id}/unprocessed/3T -name "*T2*SPC1*" -type f`
T22=`find /input/${id}/unprocessed/3T -name "*T2*SPC2*" -type f`
if [ -z "$T11" ] || [ -z "$T12" ] || [ -z "$T21" ] || [ -z "$T22" ]; then
recon-all -s "${id}" -i "${T11}" -i "${T12}" -i "${T21}" -i "${T22}"
elif [ -z "$T11" ] || [ -z "$T12" ] || [ -z "$T21" ]; then
recon-all -s "${id}" -i "${T11}" -i "${T12}" -i "${T21}"
elif [ -z "$T11" ] || [ -z "$T12" ] || [ -z "$T22" ]; then
recon-all -s "${id}" -i "${T11}" -i "${T12}" -i "${T22}"
elif [ -z "$T11" ] || [ -z "$T21" ]; then
recon-all -s "${id}" -i "${T11}" -i "${T21}"
elif [ -z "$T11" ] || [ -z "$T22" ]; then
recon-all -s "${id}" -i "${T11}" -i "${T22}"
else
recon-all -s "${id}" -i "${T11}" -i "${T21}"
fi
if [ -e "/output/$subj_id" ]; then
# no output file corresponding to this ID found,
# add it to he list
all_ids+=( "$subj_id" )
fi
done
The problem is that there can be different combinations inside the directories (T12 and T22 are sometimes missing), which is why I wrote a recon-all call for every case. How could I simplify the if statements and parallelize this script?
|
I wonder if you want to call recon-all with all of the non-empty variables. If that's the case, you might want this:
opts=( -s "$id" )
for val in "$T11" "$T12" "$T21" "$T22"; do
[[ -n "$val" ]] && opts+=( -i "$val" )
done
recon-all "${opts[@]}"
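A quick way to check the expansion with dummy values (printf stands in for recon-all; the file names and subject ID are made up):

```shell
# dummy inputs: T12 and T22 are missing, as in one of your cases
id=subj01
T11=T1_MPR1.nii T12="" T21=T2_SPC1.nii T22=""

opts=( -s "$id" )
for val in "$T11" "$T12" "$T21" "$T22"; do
    if [ -n "$val" ]; then opts+=( -i "$val" ); fi
done

printf '%s ' "${opts[@]}"; echo
# -s subj01 -i T1_MPR1.nii -i T2_SPC1.nii
```

Only the non-empty variables end up as -i options, so the six-branch if/elif chain collapses into one loop.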
| Simplify and parallel bash if statement script |
1,630,653,200,000 |
I have a script which will be running in each server and copies certain files into it. Script knows where I am running and what files I need to copy.
Script will copy files from the local datacenter local_dc, but if it is down or not responding then it will copy the same files from the remote datacenter remote_dc_1, and if that is also down then it will copy the same files from another remote datacenter remote_dc_2, as shown below -
do_Copy() {
el=$1
PRIMSEC=$2
scp david@"$local_dc":"$dir3"/new_weekly_2014_"$el"_200003_5.data "$PRIMSEC"/. || scp david@"$remote_dc_1":"$dir3"/new_weekly_2014_"$el"_200003_5.data "$PRIMSEC"/. || scp david@"$remote_dc_2":"$dir3"/new_weekly_2014_"$el"_200003_5.data "$PRIMSEC"/.
}
export -f do_Copy
# copying 5 files in parallel simultaneously in primary and secondary folder
parallel --retries 10 -j 5 do_Copy {} $PRIMARY ::: $primary_partition &
parallel --retries 10 -j 5 do_Copy {} $SECONDARY ::: $secondary_partition &
wait
echo "All files copied successfully."
Now my questions are:
Let's say the local_dc machine is down. Will it then definitely copy from remote_dc_1? And if remote_dc_1 is also down, will it copy from remote_dc_2?
If all three machines are down, what will happen? In that case I want to print a message saying all three machines are down and exit the shell script.
I cannot test this properly without actually taking a machine down.
|
Yes. It will attempt to copy from the first; if that fails it will try the second, and if that too fails, it will try the third. To have it email you at the end, use this (mailx takes the subject with -s and the recipient as an argument):
scp ... || scp ... || scp ... || echo "All attempts failed" | mailx -s "All attempts failed" "[email protected]"
To have each scp command tell you if it failed, you can use subshells:
scp ... || ( echo "machine1 down" | mailx -s "machine1 down" "[email protected]" && scp ... ) ||
( echo "machine2 down" | mailx -s "machine2 down" "[email protected]" && scp ... ) ||
echo "All attempts failed" | mailx -s "All attempts failed" "[email protected]"
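An alternative shape that makes the "all three down" case explicit: do_Copy rewritten as a loop over the hosts (a sketch; variable names are taken from your script):

```shell
# try each datacenter in order; succeed on the first working copy,
# otherwise report that all three are down and fail
do_Copy() {
    el=$1 PRIMSEC=$2
    for host in "$local_dc" "$remote_dc_1" "$remote_dc_2"; do
        scp "david@$host:$dir3/new_weekly_2014_${el}_200003_5.data" "$PRIMSEC"/. &&
            return 0
    done
    echo "All three machines are down, exiting." >&2
    return 1
}
```

Because the failure message lives inside do_Copy, GNU Parallel's --retries and your joblog still see a plain non-zero exit status for the "all down" case.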
| How to copy files from other servers if local machine is down |
1,630,653,200,000 |
Fast and simple. This command works
locate -i mymovieormysong|parallel mplayer
the song (or movie) plays, but I cannot control mplayer with the keyboard.
How can I do this (if it is possible)?
Currently, when I use the keyboard to seek forward or backward I get this:
^[[C^[[C^[[C^[[C^[[C^[[C^[[C^[[D^[[D^[[D
Edit 1: using the -u (ungroup) option the output appears, but when I press keys to control mplayer I still get [C and [D.
|
I reckon it is unlikely that you want more than one mplayer running.
Normally GNU Parallel takes the tty away from the process (due to process group logic). --tty hands the tty to the running job. So if mplayer reads from the tty, then this might work:
locate -i mymovieormysong|parallel --tty mplayer
| gnu parallel: how to control output of program? |
1,395,153,498,000 |
How can I tell whether my harddrive is laid out using an MBR or GPT format?
|
With lsblk from util-linux v. 2.33 and later, one can print only the partition table type via
lsblk /dev/nvme0n1 -dno pttype
gpt
d omits children/slaves, n omits headers and o prints only the specified field.
It's quite handy since it doesn't need post-processing the output and doesn't require root access.
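That makes it easy to branch on in a script. A sketch (the device name is an example; pttype output requires util-linux 2.33 or later):

```shell
# lsblk prints "gpt", "dos" (i.e. MBR), or nothing if no table is found
case "$(lsblk -dno pttype /dev/sda 2>/dev/null)" in
    gpt) echo "GPT disk" ;;
    dos) echo "MBR (msdos) disk" ;;
    *)   echo "no recognised partition table found" ;;
esac
```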
| GPT or MBR: How do I know? |
1,395,153,498,000 |
I'm partitioning a non-SSD hard disk with parted because I want a GPT partition table.
parted /dev/sda mklabel gpt
Now, I'm trying to create the partitions correctly aligned so I use the following command to know where the first sector begins:
parted /dev/sda unit s p free
Disk /dev/sda: 488397168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
34s 488397134s 488397101s Free Space
We can see that it starts in sector 34 (that's the default when this partition table is used).
So, to create the first partition I tried:
parted /dev/sda mkpart primary 63s 127s
to align it on sector 64 since it's a multiple of 8 but it shows:
Warning: The resulting partition is not properly aligned for best performance.
The logical and physical sector sizes in my hard disk are both 512 bytes:
cat /sys/block/sda/queue/physical_block_size
512
cat /sys/block/sda/queue/logical_block_size
512
How do I create partitions correctly aligned? What am I doing wrong?
|
In order to align partition with parted you can use --align option. Valid alignment types are:
none - Use the minimum alignment allowed by the disk type.
cylinder - Align partitions to cylinders.
minimal - Use minimum alignment as given by the disk topology information. This and the opt value will use layout information provided by the disk to align the logical partition table addresses to actual physical blocks on the disks. The min value is the minimum alignment needed to align the partition properly to physical blocks, which avoids performance degradation.
optimal - Use optimum alignment as given by the disk topology information. This aligns to a multiple of the physical block size in a way that guarantees optimal performance.
Another useful tip is that you can specify the start and end as percentages to get the partition aligned. Start at 0% and end at 100%. For example:
parted -a optimal /dev/sda mkpart primary 0% 4096MB
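The warning on sector 64 comes from parted's notion of optimal alignment, which is typically 1 MiB rather than the 8-sector (4 KiB) boundary you aligned to. With 512-byte sectors, 1 MiB alignment means start sectors that are multiples of 2048; a quick manual check (sector values are examples):

```shell
# a start sector S is 1 MiB-aligned when S % 2048 == 0 (512-byte sectors)
for start in 64 2048 4096; do
    if [ $(( start % 2048 )) -eq 0 ]; then
        echo "sector $start: 1 MiB-aligned"
    else
        echo "sector $start: not aligned"
    fi
done
```

parted can also test an existing partition directly, e.g. parted /dev/sda align-check optimal 1.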
| Create partition aligned using parted |
1,395,153,498,000 |
I keep receiving this error:
Warning!! Unsupported GPT (GUID Partition Table) detected. Use GNU Parted
I want to go back to the normal MBR. I found some advice here and did:
parted /dev/sda
mklabel msdos
quit
But when I get to the mklabel option it spits out a warning that I will lose all data on /dev/sda. Is there a way to get the normal MBR back without formatting the disk?
|
That link you posted looks like a very ugly hack type solution.
However, according to the man page, gdisk, which is used to convert MBR -> GPT, also has an option in the "recovery & transformation" menu (press r to get that) to convert GPT -> MBR; the g key will:
Convert GPT into MBR and exit. This option converts as many partitions
as possible into MBR form, destroys the GPT data structures,
saves the new MBR, and exits. Use this option if you've tried GPT and
find that MBR works better for you. Note that this function generates
up to four primary MBR partitions or three primary partitions and as
many logical partitions as can be generated. Each logical
partition requires at least one unallocated block immediately
before its first block.
I'd try that first.
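In practice the session is just two keystrokes (a sketch of the interactive dialogue, with prompts abbreviated; expect a confirmation prompt before anything destructive is written):

```
# gdisk /dev/sda
Command (? for help): r
Recovery/transformation command (? for help): g
```

Before converting, gdisk's b command in the main menu can save a backup of the current GPT to a file, which is cheap insurance.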
| Remove GPT - Default back to MBR |
1,395,153,498,000 |
I have an existing Windows 7 GPT installation, which already has a EFI System partition.
I am now trying to install a Linux on a separate harddisk, which is also GPT formatted. I did not find any working way to get grub booting without EFI system partition, so my question is:
Is it possible for grub2 to use the same EFI System partition as windows? How do I tell grub2 to use it?
To clarify my setup:
gpt /dev/sda:
1 EFI System partition created by windows (100MB)
2 "Microsoft reserved partition" (200MB)
3 Windows root (rest of disk)
gpt /dev/sdb:
# After answering my own question: this partition is not needed
1 boot partition containing grub, kernels etc.(32MB)
2 crypto LVM partition (rest of disk)
I want grub2 to use the existing /dev/sda1 EFI partition.
PS: My mainboard is EFI capable.
|
After a day of research, I can now answer my own Question: yes it is possible, and you can even use that partition as /boot and store your kernels/initramfs/etc. there.
Requirements:
Grub >= 2.00 (1.98 and 1.99 do not work)
Grub must be installed from a Linux kernel, that has support for EFI variables (CONFIG_EFI_VARS compiled in or as module efivars)
For creating the EFI boot entry you will need efibootmgr
Setup:
First mount your EFI partition to /boot
mount /dev/sdX1 /boot
If you look at the mount entry, you will see, that it is simply a FAT(32) partition. Under /boot you should find a directory efi.
As grub will call efibootmgr, you should load efivars if it is not compiled into the kernel:
modprobe efivars
Now you can install grub:
# Replace x86_64 with i386 for 32 bit installations
grub2-install --target=x86_64-efi
Grub installs its files as usual to /boot/grub2. If everything worked correctly, you should now also have a folder /boot/efi/grub2 or /boot/efi/<name_of_your_distro>. With --bootloader-id=insert_name_here you can also specify the name for the folder yourself.
Grub calls efibootmgr automatically and creates a boot entry with that name in the EFI boot menu (in my case, that means it shows up as a bootable device in the EFI menu, not sure if this is the case on every EFI board)
Further setup does not differ from usual grub2 setup, grub2-mkconfig will add the appropriate modules for EFI to your grub.cfg.
Chainloading Windows:
As I asked for a dual boot with Windows, I will include the grub configuration for chainloading it:
Chainloading a Windows installation on EFI is slightly different from one on a MBR disk. You won't need the ntfs or part_mbr modules, instead fat and part_gpt are needed.
Also, setting root is not required, this information is stored by Windows' own boot manager. Instead specify the search command. The parameters needed for it can be determined by
grub-probe --target=hints_string /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi
This will give you the parameters for search specifying the location of the EFI partition, it should look something like:
--hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 1ce5-7f28
Instead of telling chainloader the number of sectors to read, you will need to set the path to Windows' EFI loader in the EFI partition. This is the same for all Windows EFI installations. The resulting entry should look like this:
menuentry "Microsoft Windows x86_64 UEFI-GPT" {
insmod part_gpt
insmod fat
insmod search_fs_uuid
insmod chain
search --fs-uuid --no-floppy --set=root <insert ouput from grub-probe here>
chainloader /efi/Microsoft/Boot/bootmgfw.efi
}
Sources: These cover some more cases, if you want to boot from EFI, they are worth reading:
Arch Wiki on Grub2
Gentoo Wiki on Grub2
| Can GRUB2 share the EFI system partition with Windows? |
1,395,153,498,000 |
In the blkid output, some lines contain UUID and PARTUUID pairs and others only PTUUID. What do they mean?
In particular, why are two IDs required for a partition and why are some partitions identified by UUID/PARTUUID and some by PTUUID?
|
UUID is a filesystem-level UUID, which is retrieved from the filesystem metadata inside the partition. It can only be read if the filesystem type is known and readable.
PARTUUID is a partition-table-level UUID for the partition, a standard feature for all partitions on GPT-partitioned disks. Since it is retrieved from the partition table, it is accessible without making any assumptions at all about the actual contents of the partition. If the partition is encrypted using some unknown encryption method, this might be the only accessible unique identifier for that particular partition.
PTUUID is the UUID of the partition table itself, a unique identifier for the entire disk assigned at the time the disk was partitioned. It is the equivalent of the disk signature on MBR-partitioned disks but with more bits and a standardized procedure for its generation.
On MBR-partitioned disks, there are no UUIDs in the partition table. The 32-bit disk signature is used in place of a PTUUID, and PARTUUIDs are created by adding a dash and a two-digit partition number to the end of the disk signature.
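As a concrete illustration of the MBR scheme (the signature value here is made up):

```shell
# MBR-style PARTUUIDs are simply <disk signature>-<two-digit partition number>
sig=2bd2c32a   # a hypothetical 32-bit disk signature
for n in 1 2 3; do
    printf '%s-%02d\n' "$sig" "$n"
done
# 2bd2c32a-01
# 2bd2c32a-02
# 2bd2c32a-03
```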
| What is UUID, PARTUUID and PTUUID? |
1,395,153,498,000 |
I'd like to install linux, but I don't want to risk damaging my current windows installation as I have heard a lot of horror stories. Fortunately, I have an extra hard drive. Can I install linux onto that and then dual boot windows without having to modify the windows drive?
Also, I have a UEFI "BIOS" and the windows drive is in GPT format.
|
I'm going use the term BIOS below when referring to concepts that are the same for both newer UEFI systems and traditional BIOS systems, since while this is a UEFI oriented question, talking about the "BIOS" jibes better with, e.g., GRUB documentation, and "BIOS/UEFI" is too clunky. GRUB (actually, GRUB 2 — this is often used ambiguously) is the bootloader installed by linux and used to dual boot Windows.
First, a word about drive order and boot order. Drive order refers to the order in which the drives are physically connected to the bus on the motherboard (first drive, second drive, etc.); this information is reported by the BIOS. Boot order refers to the sequence in which the BIOS checks for a bootable drive. This is not necessarily the same as the drive order, and is usually configurable via the BIOS set-up screen. Drive order should not be configurable or affected by boot order, since that would be a very OS unfriendly thing to do (but in theory an obtuse BIOS could). Also, if you unplug the first drive, the second drive will likely become the first one. We are going to use UUIDs in configuring the boot loader to try and avoid issues such as this (contemporary linux installers also do this).
The ideal way to get what you want is to install linux onto the second drive in terms of drive order and then select it first in terms of boot order using the UEFI set-up. An added advantage of this is that you can then use the BIOS/UEFI boot order to select the windows drive and bypass grub if you want. The reason I recommend linux on the second drive is because GRUB must "chainload" the Windows native bootloader, and the windows bootloader always assumes it is on the first drive. There is a way to trick it, however, if you prefer or need it the other way around.
Hopefully, you can just go ahead and use a live CD or whatever and get this done using the GUI installer. Not all installers are created equal, however, and if this gets screwed up and you are left with problems such as:
I installed linux onto the first disk and now I can't boot windows, or
I installed linux onto the second disk, but using the first disk for the bootloader, and now I can't boot anything!
Then keep reading. In the second case, you should first try and re-install linux onto the second disk, and this time make sure that's where the bootloader goes. The easiest and most foolproof way to do that would be to temporarily remove the Windows drive from the machine, since we are going to assume there is nothing extra installed on it, regardless of drive order.
Once you have linux installed and you've made sure it can boot, plug the Windows drive back in (if you removed it — and remember, we ideally want it first in terms of drive order, and the second drive first in terms of boot order) and proceed to the next step.
Accessing the GRUB configuration
Boot linux, open a terminal, and
> su root
You will be asked for root's password. From this point forward, you are the superuser in that terminal (to check, try whoami), so do not do anything stupid. However, you are still a normal user in the GUI, and since we will be editing a text file, if you prefer a GUI editor we will have to temporarily change the ownership of that file and the directory it is in:
> chown -R yourusername /etc/grub.d/
If you get "Operation not permitted", you did not su properly. If you get chown: invalid user: ‘yourusername’, you took the last command too literally. (When you are done editing, restore ownership with chown -R root /etc/grub.d/.)
You can now navigate to /etc/grub.d in your filebrowser and look for a file called 40_custom. It should look like this:
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
If you can't find it, in the root terminal enter the following commands:
> touch /etc/grub.d/40_custom
> chmod 755 /etc/grub.d/40_custom
> chown yourusername /etc/grub.d/40_custom
Open it in your text editor, copy paste the part above (starting w/ #!/bin/sh) and on to the next step.
Adding a Windows boot option
Copy-paste this in with the text editor at the end of the file:
menuentry "MS Windows" {
insmod part_gpt
insmod search_fs_uuid
insmod ntfs
insmod chain
}
This is list of modules GRUB will need to get things done (ntfs may be superfluous, but shouldn't hurt anything either). Note that this is an incomplete entry — we need to add some crucial commands.
Finding the Windows Second Stage Bootloader
Your linux install has probably automounted your Windows partition and you should be able to find it in a file browser. If not, figure out a way to make it so (if you are not sure how, ask a question on this site). Once that's done, we need to know the mount point -- this should be obvious in the file browser, e.g. /media/ASDF23SF23/. To save some typing, we're going put that into a shell variable:
win="/whatever/the/path/is"
There should be no spaces on either side of the equals sign. Do not include any elements of a Windows path here. This should point to the top level folder on the Windows partition. Now:
cd $win
find . -name bootmgfw.efi
This could take a few minutes if you have a big partition, but most likely the first thing it spits out is what we are looking for; there may be further references in the filesystem containing long gobbledygook strings — those aren't it. Use Ctrl-c to stop the find once you see something short and simple like ./Windows/Boot/EFI/bootmgfw.efi or ./EFI/HP/boot/bootmgfw.efi.
Except for the . at the beginning, remember this path for later; you can copy it into your text editor on a blank line at the bottom, since we will be using it there. If you want to go back to your previous directory now, use cd -, although it does not matter where you are in the shell from here on forward.
Setting the Right Parameters
GRUB needs to be able to find and hand off the boot process to the second stage Windows bootloader. We already have the path on the Windows partition, but we also need some parameters to tell GRUB where that partition is. There should be a tool installed on your system called grub-probe or (on, e.g., Fedora) grub2-probe. Type grub and then hit Tab two or three times; you should see a list including one or the other.
> grub-probe --target=hints_string $win
You should see a string such as:
--hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1
Go back to the text editor with the GRUB configuration in it and add a line after all the insmod commands (but before the closing curly brace) so it looks like:
insmod chain
search --fs-uuid --set=root [the complete "hint bios" string]
}
Don't break that line or allow your text editor to do so. It may wrap around in the display — an easy way to tell the difference is to set line numbering on. Next:
> grub-probe --target=fs_uuid $win
This should return a shorter string of letters, numbers, and possible dashes such as "123A456B789X6X" or "b942fb5c-2573-4222-acc8-bbb883f19043". Add that to the end of the search --fs-uuid line after the hint bios string, separated with a space.
Next, if (and only if) Windows is on the second drive in terms of drive order, add a line after the search --fs-uuid line:
drivemap -s hd0 hd1
This is "the trick" mentioned earlier. Note it is not guaranteed to work but it does not hurt to try.
Finally, the last line should be:
chainloader (${root})[the Windows path to the bootloader]
}
Just to be clear, for example:
chainloader (${root})/Windows/Boot/EFI/bootmgfw.efi
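Putting the pieces together, the finished entry in /etc/grub.d/40_custom should end up looking something like this. Note the menu title, insmod lines other than chain, hint string, UUID, and bootloader path below are placeholders for illustration; use the values you found above:

```
menuentry "Windows" {
        insmod part_gpt
        insmod fat
        insmod chain
        search --fs-uuid --set=root --hint-bios=hd1,gpt1 --hint-efi=hd1,gpt1 --hint-baremetal=ahci1,gpt1 1234-ABCD
        chainloader (${root})/Windows/Boot/EFI/bootmgfw.efi
}
```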
That's it. Save the file and check in a file browser to make sure it really has been saved and looks the way it should.
Add the New Menu Option to GRUB
This is done with a tool called grub-mkconfig or grub2-mkconfig; it will have been in that list you found with Tab earlier. You may also have a command called update-grub. To check for that, just type it in the root terminal. If you get "command not found", you need to use grub-mkconfig directly. If not (including getting a longer error), you've just set the configuration and can skim down a bit.
To use grub-mkconfig directly, we first need to find grub.cfg:
> find /boot -name grub.cfg
This will probably be /boot/grub/grub.cfg or /boot/grub2/grub.cfg.
> grub-mkconfig -o /boot/grub/grub.cfg
update-grub will automatically scan the configuration for errors. grub-mkconfig will not, but it is important to do so because it's much easier to deal with them now than when you try to boot the machine. For this, use grub-script-check (or grub2-script-check):
> grub-script-check /boot/grub/grub.cfg
If this (or update-grub) produces an error indicating a line number, that's the line number in grub.cfg, but you need to fix the corresponding part in /etc/grub.d/40_custom (the file in your text editor). You may need to be root just to look at the former file though, so try less /boot/grub/grub.cfg in the terminal, hit :, and enter the line number. You should see your menu entry. Find the typo, correct it in the text editor, and run update-grub or grub-mkconfig again.
When you are done you can close the text editor and type exit in the terminal to leave superuser mode.
Reboot!
When you get to the grub menu, scroll down quickly (before the timeout expires, usually 5 seconds) to the "Windows" option and test it. If you get a text message error from grub, something is wrong with the configuration. If you get an error message from Windows, that problem is between you and Microsoft. Don't worry, however, your Windows drive has not been modified and you will be able to boot directly into it by putting it first (in terms of boot order) via the BIOS set-up.
When you return to linux again, return the permissions of the /etc/grub.d directory and its contents to their original state:
sudo chmod 755 /etc/grub.d/40_custom
References
GRUB 2 Manual
Arch Linux Wiki GRUB page
Arch has some of the best documentation going, and much of it (including that page) is mostly applicable to any GNU/Linux distro.
| Dual boot windows on second harddrive, UEFI/GPT system |
1,395,153,498,000 |
Question: Should I use fdisk when creating partitions?
Or is it advisable to use parted since it uses GPT? (by default?) And with that I can create partitions larger than 2TB.
|
MBR, Master Boot Record
Wikipedia excerpt; link:
A master boot record (MBR) is a special type of boot sector at the very beginning of partitioned computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-compatible systems and beyond. The concept of MBRs was publicly introduced in 1983 with PC DOS 2.0.
I intentionally copy-pasted this for you to see that MBR comes all the way from 1983.
GPT, GUID Partition Table
Wikipedia excerpt; link:
GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical storage device used in a desktop or server PC, such as a hard disk drive or solid-state drive, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum proposed replacement for the PC BIOS), it is also used on some BIOS systems because of the limitations of master boot record (MBR) partition tables, which use 32 bits for storing logical block addresses (LBA) and size information on a traditionally 512 byte disk sector.
To answer your question, I advise you to use GPT partitioning where possible; in other words, if you don't have to use MBR, use GPT instead.
GPT advantages over MBR
it has a backup partition table
it has no ridiculous limit for primary partitions, it allows for up to 128 partitions without having to extend
it also stores cyclic redundancy check (CRC) values to check that its data is intact
as you mentioned, it supports large drives, the maximum size is 8 ZiB (2^64 sectors × 2^9 bytes per sector)
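That 8 ZiB figure follows directly from the 64-bit sector addressing; a quick sanity check from the shell (using python3 for the big-integer arithmetic):

```shell
# 2^64 sectors x 2^9 bytes per sector = 2^73 bytes; 1 ZiB = 2^70 bytes
python3 -c 'print((2**64 * 2**9) // 2**70, "ZiB")'   # prints: 8 ZiB
```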
The usual tools
MBR in a CLI:
fdisk (link to manual); note: fdisk from util-linux 2.30.2 partially understands GPT now
GPT in a CLI:
gdisk (link to manual)
For both MBR and GPT in a CLI:
parted (link to manual)
For both MBR and GPT in a GUI:
gparted (link to wiki)
| Should I use fdisk for partitioning or GPT aware tools? |
1,395,153,498,000 |
Will # dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table?
Or is it the other way around, i.e, does
# fdisk /dev/sda g (for GPT)
wipe out the zeros written by /dev/zero?
|
Will dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table?
Yes, the partition table is in the first part of the drive, so writing over it will destroy it. That dd will write over the whole drive if you let it run (so it will take quite some time).
Something like dd bs=512 count=50 if=/dev/zero of=/dev/sda would be enough to overwrite the first 50 sectors, including the MBR partition table and the primary GPT. Though at least according to Wikipedia, GPT has a secondary copy of the partition table at the end of the drive, so overwriting just the part in the head of the drive might not be enough.
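As a sanity check on that sector count (this calculation is mine, not from the answer): the primary GPT occupies the protective MBR, the GPT header, and the default 128 partition entries of 128 bytes each, so 50 sectors comfortably covers it:

```shell
# protective MBR (1 sector) + GPT header (1 sector) + 128 entries x 128 B each
sectors=$(( 1 + 1 + (128 * 128) / 512 ))
echo "$sectors"   # prints: 34
```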
(You don't have to use dd, though. head -c10000 /dev/zero > /dev/sda or cat /bin/ls > /dev/sda would have the same effect.)
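If you want to see the effect without risking a real disk, the same idea can be rehearsed on a scratch image file (everything below is an illustration on a throwaway file carrying only the GPT magic string, not a full GPT):

```shell
# Plant the GPT signature "EFI PART" at byte offset 512 of a scratch image,
# then zero the first 50 sectors and confirm the signature is gone.
truncate -s 1M disk.img
printf 'EFI PART' | dd of=disk.img bs=1 seek=512 conv=notrunc 2>/dev/null
sig_before=$(od -An -tx1 -j 512 -N 8 disk.img | tr -d ' ')
dd if=/dev/zero of=disk.img bs=512 count=50 conv=notrunc 2>/dev/null
sig_after=$(od -An -tx1 -j 512 -N 8 disk.img | tr -d ' ')
echo "$sig_before -> $sig_after"
rm -f disk.img
```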
does fdisk /dev/sda g (for GPT) wipe out the zeros written by /dev/zero?
Also yes (provided you save the changes).
(However, the phrasing in the title is just confusing, /dev/zero in itself does not do anything any more than any regular storage does.)
| Will dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table? |
1,395,153,498,000 |
I'm using GPT as my partitioning scheme. I check the UUID's of my partitions:
# ls -l /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Oct 18 22:39 0793009a-d460-4f3d-83f6-8103f8ba24e2 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Oct 18 22:39 13f83c47-ad62-4932-8d52-e93626166e7f -> ../../sdc3
lrwxrwxrwx 1 root root 10 Oct 18 22:39 1b247b1f-0b7b-419e-bc3b-0f90cbadb87c -> ../../sdc2
lrwxrwxrwx 1 root root 10 Oct 18 22:39 224d5933-7a23-4833-b785-79a67c9b9306 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 18 22:39 2ff625b2-b96b-4ce5-b752-57cdf7092258 -> ../../sda3
lrwxrwxrwx 1 root root 10 Oct 18 22:39 449956f9-7289-49ed-9a37-ed6be9264d1c -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 18 22:39 a2a95e45-7e4c-4b20-a2bd-66d96e737590 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 18 22:39 a8c07f74-9d6d-4f45-9453-dd2e6998f100 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 18 22:39 c2c9e94c-9c73-4d6e-a4ec-2229f56152e8 -> ../../sdc1
And add /dev/sdc2 using its UUID:
# mdadm /dev/md0 --add /dev/disk/by-partuuid/1b247b1f-0b7b-419e-bc3b-0f90cbadb87c
mdadm: added /dev/disk/by-partuuid/1b247b1f-0b7b-419e-bc3b-0f90cbadb87c
But when I look at the details of the RAID array it reports /dev/sdc2 instead of the UUID.
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Oct 18 21:52:43 2012
Raid Level : raid1
Array Size : 1048564 (1024.16 MiB 1073.73 MB)
Used Dev Size : 1048564 (1024.16 MiB 1073.73 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Oct 19 15:57:19 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : cow:0 (local to host cow)
UUID : 9e691db1:f8fcc7d8:f56d9c11:1c202693
Events : 47
Number Major Minor RaidDevice State
2 8 34 0 active sync /dev/sdc2
1 8 18 1 active sync /dev/sdb2
Is it impossible to construct a RAID array using UUID's?
Or is it using the UUID but reports the normal name to increase the readability? In that case, how can I tell?
I would like to use UUID's so that I may move the disks around freely in the machine without breaking anything. Doesn't that sound like a good idea?
|
mdraid always allows you to move disks around freely in the machine, regardless of how you add the disk to the array. It tracks the disks by the RAID metadata (superblocks) stored on the disk.
Note that this assumes mdadm can find the disks when it's assembling the arrays. The default (specified in /etc/mdadm/mdadm.conf) is normally DEVICE partitions, which means to look at all partitions (on all disks), checking for RAID superblocks. It checks for a match of the array name or UUID (depending on what you say to do in that config file); notice how both are in your --detail output.
Example:
DEVICE partitions
:
ARRAY /dev/md0 metadata=1.2 UUID=9e691db1:f8fcc7d8:f56d9c11:1c202693
When told to assemble /dev/md0, mdadm will scan all partitions on the system looking for 1.2 superblocks with the UUID 9e691db1:f8fcc7d8:f56d9c11:1c202693. It'll read the device number, etc. out of each, and use that information to assemble the array.
You would only change the DEVICE line if scanning all partitions is expensive. For example, if you have hundreds of them, over the network. Then you could list the relevant devices there, however you'd like (by UUID should work fine).
| Using UUID's with mdadm |
1,395,153,498,000 |
OS: Debian Bullseye, uname -a:
Linux backup-server 5.10.0-5-amd64 #1 SMP Debian 5.10.24-1 (2021-03-19) x86_64 GNU/Linux
I am looking for a way of undoing this wipefs command:
wipefs --all --force /dev/sda? /dev/sda
while the former structure was:
fdisk -l /dev/sda
Disk /dev/sda: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: CT240BX200SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 8D5A08BF-0976-4CDB-AEA2-8A0EAD44575E
Device Start End Sectors Size Type
/dev/sda1 2048 1050623 1048576 512M EFI System
/dev/sda2 1050624 468860927 467810304 223.1G Linux filesystem
and the output of that wipefs command (is still sitting on my terminal):
/dev/sda1: 8 bytes were erased at offset 0x00000052 (vfat): 46 41 54 33 32 20 20 20
/dev/sda1: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/sda1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa
/dev/sda2: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x37e4895e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
I might have found an article hosted on https://sysbits.org/, namely:
https://sysbits.org/undoing-wipefs/
I will quote the wipe and undo parts from there. I want to know if it's sound and whether I can safely execute it on my server, which I have not yet rebooted; since then I have been trying to figure out a way out of this hell of a typo:
wipe part
wipefs -a /dev/sda
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x3b9e655e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
undo part
echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x00000200))
echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x3b9e655e00))
echo -en '\x55\xaa' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x000001fe))
partprobe /dev/sda
Possibly alternative solution
Just now, I ran the testdisk on that SSD drive, and it found many partitions, but only these two match the original:
TestDisk 7.1, Data Recovery Utility, July 2019
Christophe GRENIER <[email protected]>
https://www.cgsecurity.org
Disk /dev/sda - 240 GB / 223 GiB - CHS 29185 255 63
Partition Start End Size in sectors
1 P EFI System 2048 1050623 1048576 [EFI System Partition] [NO NAME]
2 P Linux filesys. data 1050624 468860927 467810304
Can I / Should I just hit Write (Write partition structure to disk)? If not, why not?
|
You're lucky that wipefs actually prints out the parts it wipes.
These,
wipefs -a /dev/sda
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x3b9e655e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x00000200))
echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x3b9e655e00))
echo -en '\x55\xaa' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x000001fe))
do look sensible to me in general.
But note that the offsets there are different from the ones in your case! You'll need to use the values you got from wipefs.
Based on the offset values (0x3b9e655e00 vs 0x37e4895e00), they had a slightly larger disk than you did (~256 GB vs ~240 GB). Using their values would mean that the backup GPT at the end of disk would be left broken.
That shouldn't matter much, in that any partitioning tool should be able to rewrite it as long as the first copy is intact.
But if it was the other way around, and the wrong offset you used happened to be within the size of your disk, you'd end up overwriting some random part of the drive. Not good.
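One quick sanity check you can do on the second offset (my own cross-check, based on the disk size fdisk reported above): the backup GPT header occupies the last 512-byte sector, so its offset should equal the disk size minus 512. For the asker's 240057409536-byte SSD:

```shell
printf '0x%x\n' $(( 240057409536 - 512 ))   # prints: 0x37e4895e00
```

which matches the second offset wipefs reported.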
Also, the magic numbers for the filesystems of course need to be in the right places.
I tested wiping and undoing it with a VFAT image, and wrote this off the top of my head before reading your version too closely:
printf "$(printf '\\x%s' 46 41 54 31 36 20 20 20)" |
dd bs=1 conv=notrunc seek=$(printf "%d" 0x00000036) of=test.vfat
that's for the single wipefs output line (repeat for others):
test.vfat: 8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20
The nested printf at the start allows to copypaste the output from wipefs, without having to manually change 46 41 54 31... to \x46\x41\x54\x31....
Again, you do need to take care to enter the correct values in the correct offsets!
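Here is the same printf/dd trick rehearsed end-to-end on a throwaway file (restoring the two PMBR signature bytes 55 aa at offset 0x1fe), so you can convince yourself it works before touching the real drive:

```shell
truncate -s 4096 img.bin
# copy the hex bytes and the offset straight from the wipefs output line
printf "$(printf '\\x%s' 55 aa)" |
  dd bs=1 conv=notrunc seek=$(printf '%d' 0x1fe) of=img.bin 2>/dev/null
restored=$(od -An -tx1 -j 510 -N 2 img.bin | tr -d ' ')
echo "$restored"   # prints: 55aa
rm -f img.bin
```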
It probably wouldn't be too bad to automate that further, but what with the risk involved, I'm not too keen to post such a script publicly without significant testing.
If you can, take a copy of the disk contents before messing with it.
| Undoing wipefs --all --force /dev/sda? /dev/sda |
1,395,153,498,000 |
I've got an USB pen drive and I'd like to turn it into a bootable MBR device. However, at some point in its history, that device had a GPT on it, and I can't seem to get rid of that. Even after I ran mklabel dos in parted, grub-install still complains about
Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..
I don't want to preserve any data. I only want to clear all traces of the previous GPT, preferably using some mechanism which works faster than a dd if=/dev/zero of=… to zero out the whole drive. I'd prefer a terminal-based (command line or curses) approach, but some common and free graphical tool would be fine as well.
|
If you want a single command, instead of navigating interactive menus in gdisk, try:
$ sudo sgdisk -Z /dev/sdx
substituting sdx with the name of your disk in question.
(obviously - don't wipe out the partition information on your system disk ;)
| Remove all traces of GPT disk label |
1,395,153,498,000 |
I know about the advanced format and setting 2048 free sectors at the beginning of a disk. But I just converted a partition table of my disk from MS-DOS to GPT, and I noticed this:
Before:
Number Start End Size Type File system Flags
32,3kB 1049kB 1016kB Free Space
1 1049kB 31,5GB 31,5GB primary ntfs
2 31,5GB 43,0GB 11,5GB primary
3 43,0GB 44,1GB 1074MB primary linux-swap(v1)
4 44,1GB 80,0GB 36,0GB extended
5 44,1GB 54,6GB 10,5GB logical
6 54,6GB 65,0GB 10,5GB logical ext4 boot
7 65,0GB 80,0GB 15,0GB logical
80,0GB 80,0GB 56,8kB Free Space
After:
Number Start End Size File system Name Flags
17,4kB 1049kB 1031kB Free Space
1 1049kB 31,5GB 31,5GB ntfs Microsoft basic data msftdata
2 31,5GB 43,0GB 11,5GB Linux filesystem
3 43,0GB 44,1GB 1074MB linux-swap(v1) Linux swap
44,1GB 44,1GB 1049kB Free Space
5 44,1GB 54,6GB 10,5GB Linux filesystem
54,6GB 54,6GB 1049kB Free Space
6 54,6GB 65,0GB 10,5GB ext4 Linux filesystem
65,0GB 65,0GB 1049kB Free Space
7 65,0GB 80,0GB 15,0GB Linux filesystem
80,0GB 80,0GB 39,9kB Free Space
As you can see, there are 3 additional gaps there (2048 sectors each), one before each logical partition. There are no gaps between the 1st and 2nd, or the 2nd and 3rd partitions.
Does anyone know why the gaps exist only between logical partitions?
|
Partitioners like to align partitions on a mebibyte boundary these days. For MBR partitioning, there are 4 primary partitions, and for the rest you need extended and logical partitions.
While the layout of the primary partitions is expressed at the end of the first sector of the disk, for the logical partitions you've got a linked list of additional partition tables (each of which specifies only one partition). Typically, the first one is at the beginning of the extended partition (which is itself defined as a primary partition) and defines the first logical partition, and it links to the next partition table which defines the next logical partition. That next partition table will typically be located after the first logical partition.
All those partition tables only take a few bytes outside of the partitions, but because of the mebibyte alignment, a full mebibyte has to be used for them.
GPT on the other end stores all the partitioning information at the beginning of the disk (with a backup at the end), so after converting, that space that was used for the logical partition partition tables becomes free.
Note that you only need one sector to store those MBR logical partition tables, so strictly speaking, in MBR partitioning there would be 2047 sectors free, if the partitioner were willing not to align partitions on mebibyte boundaries.
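The 1049 kB "Free Space" entries in the parted output above are exactly this alignment unit; parted is just printing 2048 × 512 bytes in decimal kilobytes:

```shell
gap=$(( 2048 * 512 ))
echo "$gap bytes"                    # prints: 1048576 bytes (parted rounds this to 1049kB)
echo "$(( gap / 1024 / 1024 )) MiB"  # prints: 1 MiB
```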
| Why are there 2048 sectors of free space between each logical partition? |
1,395,153,498,000 |
I have a 3TB drive which I have partitioned using GPT:
$ sudo sgdisk -p /dev/sdg
Disk /dev/sdg: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 2BC92531-AFE3-407F-AC81-ACB0CDF41295
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2932 sectors (1.4 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 10239 4.0 MiB 8300
2 10240 5860532216 2.7 TiB 8300
However, when I connect it via a USB adapter, it reports a logical sector size of 4096 and the kernel no longer recognizes the partition table (since it's looking for the GPT at sector 1, which is now at offset 4096 instead of 512):
$ sudo sgdisk -p /dev/sdg
Creating new GPT entries.
Disk /dev/sdg: 732566646 sectors, 2.7 TiB
Logical sector size: 4096 bytes
Disk identifier (GUID): 2DE535B3-96B0-4BE0-879C-F0E353341DF7
Partition table holds up to 128 entries
First usable sector is 6, last usable sector is 732566640
Partitions will be aligned on 256-sector boundaries
Total free space is 732566635 sectors (2.7 TiB)
Number Start (sector) End (sector) Size Code Name
Is there any way to force Linux to recognize the GPT at offset 512? Alternatively, is there a way to create two GPT headers, one at 512 and one at 4096, or will they overlap?
EDIT: I have found a few workarounds, none of which are very good:
I can use a loopback device to partition the disk:
$ losetup /dev/loop0 /dev/sdg
Loopback devices always have a sector size of 512, so this allows me to partition the device how I want. However, the kernel does not recognize partition tables on loopback devices, so I have to create another loopback device and manually specify the partition size and offset:
$ losetup /dev/loop1 /dev/sdg -o $((10240*512)) --sizelimit $(((5860532216-10240)*512))
I can write a script to automate this, but it would be nice to be able to do it automatically.
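For reference, the arithmetic those losetup options encode (taken from the sgdisk output above: partition 2 starts at sector 10240 and its end sector is 5860532216, with 512-byte logical sectors):

```shell
offset=$(( 10240 * 512 ))
size=$(( (5860532216 - 10240) * 512 ))
echo "offset=$offset sizelimit=$size"
```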
I can run nbd-server and nbd-client; NBD devices have 512-byte sectors by default, and NBD devices are partitionable. However, the NBD documentation warns against running the nbd server and client on the same system; When testing, the in-kernel nbd client hung and I had to kill the server.
I can run istgt (user-space iSCSI target), using the same setup. This presents another SCSI device to the system with 512-byte sectors. However, when testing, this failed and caused a kernel NULL pointer dereference in the ext4 code.
I haven't investigated devmapper yet, but it might work.
|
I found a solution: A program called kpartx, which is a userspace program that uses devmapper to create partitions from loopback devices, which works great:
$ loop_device=`losetup --show -f /dev/sdg`
$ kpartx -a $loop_device
$ ls /dev/mapper
total 0
crw------- 1 root root 10, 236 Mar 2 17:59 control
brw-rw---- 1 root disk 252, 0 Mar 2 18:30 loop0p1
brw-rw---- 1 root disk 252, 1 Mar 2 18:30 loop0p2
$
$ # delete device
$ kpartx -d $loop_device
$ losetup -d $loop_device
This essentially does what I was planning to do in option 1, but much more cleanly.
| Recognizing GPT partition table created with different logical sector size |
1,395,153,498,000 |
I've seen some disk formatting/partitioning discussions that mention destroying existing GPT/MBR data structures as a first step:
sgdisk --zap-all /dev/nvme0n1
I wasn't previously aware of this, and when I've set up a disk, I've generally used:
parted --script --align optimal \
/dev/nvme0n1 -- \
mklabel gpt \
mkpart ESP fat32 1MiB 512MiB \
set 1 boot on \
name 1 boot \
mkpart primary 512MiB 100% \
set 2 lvm on \
name 2 primary
Should I have cleared things out (e.g. sgdisk --zap-all) first? What are the downsides to not having done that?
|
This advice is from the time when other tools didn't properly support GPT and were not removing all the pieces of the GPT metadata. From sgdisk man page for the --zap/--zap-all option:
Use this option if you want to repartition a GPT disk using fdisk or some other GPT-unaware program.
That's no longer true. Both fdisk and parted support GPT now, and if you create a new partition table, they will remove both GPT headers (GPT has a backup header at the end of the disk that can cause problems when not removed) and the protective MBR header.
That being said, it is generally not a bad idea to properly remove all headers/signatures when removing a preexisting storage layout. I personally use wipefs to remove signatures from all devices before removing them, just to make sure there is nothing left that could be unexpectedly discovered later -- I've been in situations where a newly created MD array or LVM logical volume suddenly had a filesystem on it just because it was created at the same (or close enough) offset where a previous device was. Storage tools usually try to detect filesystem signatures when creating a new partition/device and can wipe them for you, but doing that manually never hurts.
| Is it important to delete GPT/MBR labels before reformatting/repartitioning? |
1,395,153,498,000 |
What is the equivalent for GPT using HDDs of:
# fdisk -l /dev/hda > /mnt/sda1/hda_fdisk.info
I got this from https://wiki.archlinux.org/index.php/disk_cloning (under "Create disk image") for getting the extra hdd info which may be important for restoring or extracting from multi-partition images.
When I do this I get an error similar to:
"WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted."
|
Some older Unix partitioning tools are deprecated and don't understand GPT, which is the newer partition table format. GNU Parted does support GPT, and gparted is the GNOME graphical front-end for Parted.
for example:
root@debian:/home/mohsen# parted -l /dev/sda
Model: ATA WDC WD7500BPVT-7 (scsi)
Disk /dev/sda: 750GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 32.3kB 41.1MB 41.1MB primary fat16 diag
2 41.9MB 2139MB 2097MB primary fat32 boot
3 2139MB 52.1GB 50.0GB primary ext4
4 52.1GB 749GB 697GB extended
5 52.1GB 737GB 685GB logical ext4
6 737GB 749GB 12.0GB logical linux-swap(v1)
NOTE: GPT is an abbreviation of GUID Partition Table and is much newer.
GPT
| Getting the extra GPT info; a "fdisk -l" equivalent |
1,395,153,498,000 |
I'm just reading up on GUID partition tables, and messing around with gdisk, I see these two titles.
What is the difference between them?
I am referring to the following (emphasis mine) shown when running gdisk:
GPT fdisk (gdisk) version 0.8.7
Type device filename, or press <Enter> to exit: /dev/sda
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): i
Partition number (1-7): 4
Partition GUID code: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 (Microsoft basic data)
Partition unique GUID: 85E66D2F-3709-4060-938E-FFE836433CC9
First sector: 2844672 (at 1.4 GiB)
Last sector: 651208703 (at 310.5 GiB)
Partition size: 648364032 sectors (309.2 GiB)
Attribute flags: 0000000000000000
Partition name: 'Basic data partition'
Command (? for help):
|
The partition unique GUID is generated at the time that the partition is created. It uniquely identifies the partition at least inside the disk and probably among all the disks you own (because it's unbelievably rare for GUIDs to collide).
A partition GUID code (by which I believe you mean a partition type GUID), on the other hand, is a known, fixed GUID. It identifies the type of data inside that partition. For example, if you had a partition that contained an ordinary GNU/Linux filesystem, you would assign it a partition type GUID of 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (defined as "GNU/Linux filesystem data"). If that partition was used as your /home, you would give it a GUID of 933AC7E1-2EB4-4F13-B844-0E14E2AEF915 (defined as "GNU/Linux /home"). If that partition was encrypted with, say, LUKS, you would give it a GUID of CA7D7CCB-63ED-4C53-861C-1742536059CC (defined as "LUKS partition"). And so on and so forth.
tl;dr: the partition unique GUID identifies that exact partition. The partition GUID code identifies the type of data inside that particular partition.
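A small illustration of the distinction (a sketch using python3's uuid module; the type GUID is the "GNU/Linux filesystem data" constant quoted above):

```shell
# Type GUID: a fixed, well-known constant shared by every partition of that type
type_guid="0FC63DAF-8483-4772-8E79-3D69D8477DE4"

# Unique partition GUIDs: generated fresh each time, so two runs never match
g1=$(python3 -c 'import uuid; print(uuid.uuid4())')
g2=$(python3 -c 'import uuid; print(uuid.uuid4())')
echo "type GUID (constant):  $type_guid"
echo "unique GUIDs (differ): $g1 / $g2"
```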
| What's the difference between the Partition GUID Code and Partition unique GUID? |
1,395,153,498,000 |
All of the tools I've tried until now were only capable to create a dual (GPT & MBR) partition table, where the first 4 of the GPT partitions were mirrored to a compatible MBR partition.
This is not what I want. I want a pure GPT partition table, i.e. where there isn't MBR table on the disk, and thus there isn't also any synchronizing between them.
Is it somehow possible?
|
TO ADDRESS YOUR EDIT:
I didn't notice the edit to your question until just now. As written now, the question is altogether different than when I first answered it. The mirror you describe is not in the spec, actually, as it is instead a rather dangerous and ugly hack known as a hybrid-MBR partition format. This question makes a lot more sense now - it's not silly at all, in fact.
The primary difference between a GPT disk and a hybrid MBR disk is that a GPT's MBR will describe the entire disk as a single MBR partition, while a hybrid MBR will attempt to hedge for (extremely ugly) compatibility's sake and describe only the area covered by the first four partitions. The problem with that situation is the hybrid-MBR's attempts at compatibility completely defeat the purpose of GPT's Protective MBR in the first place.
As noted below, the Protective MBR is supposed to protect a GPT-disk from stupid applications, but if some of the disk appears to be unallocated to those, all bets are off. Don't use a hybrid-MBR if it can be at all helped - which, if on a Mac, means don't use the default Bootcamp configuration.
In general, if looking for advice on EFI/GPT-related matters go nowhere else (excepting maybe a slight detour here first) but to rodsbooks.com.
ahem...
This (used to be) kind of a silly question - I think you're asking how to partition a GPT disk without a Protective MBR. The answer to that question is you cannot - because the GPT is a disk partition table format standard, and that standard specifies a protective MBR positioned at the head of the disk. See?
What you can do is erase the MBR or overwrite it - it won't prevent most GPT-aware applications from accessing the partition data anyway, but the reason it is included in the specification is to prevent non-GPT-aware applications from screwing with the partition-table. It prevents this by just reporting that the entire disk is a single MBR-type partition already, and nobody should try writing a filesystem to it because it is already allocated space. Removing the MBR removes that protection.
In any case, here's how:
This creates a 4G ./img file full of NULs...
</dev/zero >./img \
dd ibs=4k obs=4kx1k count=1kx1k
1048576+0 records in
1024+0 records out
4294967296 bytes (4.3 GB) copied, 3.38218 s, 1.3 GB/s
This writes a partition table to it - to include the leading Protective MBR.
Each of printf's arguments is followed by a \newline and written to gdisk's stdin.
gdisk interprets the commands as though they were typed at it interactively and acts accordingly, to create two GPT partition entries in the GUID Partition Table it writes to the head of our ./img file.
All terminal output is dumped to >/dev/null (because it's a lot and we'll be having a look at the results presently anyway).
printf %s\\n o y n 1 '' +750M ef00 \
n 2 '' '' '' '' \
w y | >/dev/null \
gdisk ./img
This gets pr's four-columned formatted representation of the offset-accompanied strings in the first 2K of ./img.
<./img dd count=4 |
strings -1 -td |
pr -w100 -t4
4+0 records in
4+0 records out
2048 bytes (2.0 kB) copied, 7.1933e-05 s, 28.5 MB/s
451 * 1033 K 1094 t 1212 n
510 U 1037 > 1096 e 1214 u
512 EFI PART 1039 ;@fY 1098 m 1216 x
524 \ 1044 30 1153 = 1218
529 P 1047 L 1158 rG 1220 f
531 ( 1050 E 1161 y=i 1222 i
552 " 1065 w 1165 G} 1224 l
568 V 1080 E 1170 $U.b 1226 e
573 G 1082 F 1175 N 1228 s
575 G 1084 I 1178 C 1230 y
577 y 1086 1180 b 1232 s
583 G 1088 S 1185 x 1234 t
602 Ml 1090 y 1208 L 1236 e
1024 (s* 1092 s 1210 i 1238 m
You can see where the MBR ends there, yeah? Byte 512.
This writes 512 spaces over the first 512 bytes in ./img.
<>./img >&0 printf %0512s
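A quick check (mine) that this really emits exactly 512 bytes: printf pads the empty string out to a field width of 512 spaces.

```shell
printf %0512s | wc -c   # prints: 512
```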
And now for the fruits of our labor.
This is an interactive run of gdisk on ./img.
gdisk ./img
GPT fdisk (gdisk) version 1.0.0
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: present
Found valid GPT with corrupt MBR; using GPT and will write new
protective MBR on save.
Command (? for help): p
Disk ./img: 8388608 sectors, 4.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0528394A-9A2C-423B-9FDE-592CB74B17B3
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 8388574
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1538047 750.0 MiB EF00 EFI System
2 1538048 8388574 3.3 GiB 8300 Linux filesystem
| How to construct a GPT-only partition table on Linux? |
1,395,153,498,000 |
A friend of mine used my USB stick to install a new version of OS X on his mac. Now that I got it back, I wanted to wipe it (I use Linux myself). However, I'm having a bit of trouble doing so. The first thing I did was write a Fedora LiveCD to it, using dd:
# dd if=Fedora.iso of=/dev/sdb
This, I thought, would overwrite the partition table which lies at the beginning of the device and consequently delete the partitions the OS X installer had created. However, I was wrong, the partitions were still there. So I looked up GUID partition tables and realized they add headers not only at the beginning of the device, but at the end, too. So I did:
$ sudo dd if=/dev/zero of=/dev/sdb
dd: writing to `/dev/sdb': No space left on device
15687681+0 records in
15687680+0 records out
8032092160 bytes (8.0 GB) copied, 1354.82 s, 5.9 MB/s
After this I removed the USB stick from the computer and plugged it back in. Running blkid now would yield no partitions on the device. However, after writing the Fedora image again, the OS X partitions are back:
$ sudo blkid
/dev/sdb1: LABEL="Fedora-17-x86_64-Live-Desktop.is" TYPE="iso9660"
/dev/sdb2: SEC_TYPE="msdos" LABEL="EFI" UUID="B368-CE08" TYPE="vfat"
/dev/sdb3: UUID="f92ff3eb-0250-303f-8030-7d063e302ccf" LABEL="Fedora 17" TYPE="hfsplus"
I suspect this has something to do with that Protective MBR bit in the wikipedia page above. How can I get rid of it?
Update
I ultimately ran parted and deleted the GPT from there. I did get spewed with warnings about a corrupted GPT (probably from zeroing it), saying that GPT "signatures" were still there.
So I ultimately restored my USB stick, but it would still be nice if someone could shed some light on what exactly happened, where were those signatures stored?
|
Found the answer: the Fedora ISO contains a GUID Partition Table with a partition layout very similar to that of OS X. Because of this, I confused the partitions created by
dd if=Fedora.iso of=/dev/sdb
with the ones created by the OS X installer. The confusion was furthered by the fact that one of the partitions has a HFS+ filesystem, which is specific to OS X. What's even more curious is the fact that running parted after writing the ISO to the stick yields:
$ sudo parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Warning: /dev/sdb contains GPT signatures, indicating that it has a GPT table. However, it does not have a valid fake msdos partition table, as it should. Perhaps it was corrupted --
possibly by a program that doesn't understand GPT partition tables. Or perhaps you deleted the GPT table, and are now using an msdos partition table. Is this a GPT partition table?
Yes/No?
Anyway, the point is that the partitions were not magically reinstated after zeroing the entire device, instead they were created when dd-ing the ISO.
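As a side note, you can inspect the partition table embedded in such an ISO directly, without writing it to a stick first, since parted happily operates on plain files (Fedora.iso here stands for whatever image you have):

```shell
# Print the partition table that dd would copy to the stick
parted --script Fedora.iso unit MiB print
```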
| Where is the GUID Partition Table stored on a device? |
1,395,153,498,000 |
Background
I'm setting up a new build, with all new hardware, tabula rosa. I want to have multiple Linux installations and common data partitions.
From what I've gathered so far, using new hardware and up-to-date kernels, I should be able to use rEFInd as a simple boot manager and use a fully modern boot process.
I've read Rod's general instructions, but I need some more specific advice.
Question
Since disk partition editors tend to "helpfully" hide the EFI partition, how can I set that up on a new unformatted disk?
With gparted 0.16.1, I created a gpt type partition table. But there's no indication that this is the case: the display looks no different than before, or than with a legacy partition table in place. So did it do anything? The New partition command gives no option for the special EFI reserved partition, so did it do that automatically too?
Constraints and Assumptions
There is no existing OS, and no optical drives. Assume that any existing contents on the SSD should be blown away (junkware from the manufacturer or previous attempts to partition). I'm booting UBCD from a USB thumbdrive, so using gparted or other tools included in the Parted Magic image would be easiest.
Once I have a proper GPT disk with the special EFI partition, I'm comfortable using gparted etc. for additional partitions, as I've done as long as there have been PCs with HDDs.
|
Current util-linux versions of fdisk support GPT, the one I'm looking at here is fdisk from util-linux 2.24.2 (reported via fdisk -v).
Run fdisk /dev/whatever. Have a look at the options with m. Note these change depending on the state of the partition table. First check what state the disk is currently in with p. Note the Disklabel type; if it is gpt you don't have to do anything, you can delete the existing partitions and start creating your own.
If not, use the g option. This will eliminate any existing partitions because fdisk does not convert the MBR table. You can now start adding partitions with n. For the EFI partition, use t to set the type to 1, then the table should read, e.g.,
Device Start End Size Type
/dev/sdb1 256 122096640 465.8G EFI System
Obviously that's a bit silly, but hopefully the point is clear. None of your changes take effect until you use w and exit.
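For scripted setups, the same layout can also be created non-interactively with sfdisk from recent util-linux versions. A sketch, where /dev/sdX is a placeholder and the command destroys its contents:

```shell
# 'label: gpt' creates a fresh GPT; 'uefi' is sfdisk's alias for the EFI System type.
# The second partition line takes the rest of the disk.
sfdisk /dev/sdX <<'EOF'
label: gpt
size=512MiB, type=uefi
type=linux
EOF
```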
| How to initialize new disk for UEFI/GPT? |
1,395,153,498,000 |
If I inspect an hybrid ISO with tools like fdisk and gdisk, then looks like hybrid ISO has both the MBR and GPT in order to support both the BIOS and UEFI:
# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.10
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: present
Found valid MBR and GPT. Which do you want to use?
1 - MBR
2 - GPT
3 - Create blank GPT
Your answer:
If I inspect the disk with fdisk then it looks broken, because a smaller partition lies inside a larger one, which should be impossible:
# fdisk -l /dev/sdb
Disk /dev/sdb: 7.5 GiB, 8036285952 bytes, 15695871 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4a2bafa7
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 1284095 1284096 627M 0 Empty
/dev/sdb2 8568 9399 832 416K ef EFI (FAT-12/16/32)
#
How such hybrid MBR-GPT setups work?
|
A USB flash drive to which a hybrid ISO has been written cannot be re-partitioned with fdisk or gparted anymore, because hybrid layouts (combining an ISO9660 partition, GPT and MBR partitions) confuse those tools. The drive will work very well with Linux on BIOS and UEFI systems, but you cannot re-partition it with fdisk and Gparted because they think the flash drive has invalid partitions.
If you ever need to re-partition the flash drive again, just do :
dd if=/dev/zero of=/dev/<flash-drive-device-name> bs=1M count=1
After doing this, Gparted will regard your flash drive as completely empty and will offer to create a new MS-DOS partition table.
| How to understand partition table on hybrid ISO image? |
1,395,153,498,000 |
BIOS firmware can boot a BIOS formatted /boot partition installed on a software RAID 1 pair no problem. It can even boot from a /boot installed on LVM volume that lives on a software RAID 1 pair.
But with a uEFI install, /boot/efi has to be on a non md partition or the firmware can not access it.
Is this a flaw with uEFI firmware? Or is the problem with how Ubuntu sets up /boot/efi on software RAID devices? Could it be a flaw with how GPT partition tables present software RAID to the firmware?
For reference, I’m using:
Ubuntu Server 14.04.3 64bit
mdadm RAID setup from the 'Manual' option in the partitioner.
|
EFI knows how to access FAT and FAT32 filesystems. This is why your EFI boot partition has to be FAT or FAT32 formatted. EFI however does not know how to read a software RAID 1 partition, even if it is formatted using FAT32. There is a pretty simple way around this, at least using Arch Linux. When installing the system, you set the boot partition up as a FAT32 formatted RAID, but you direct EFI to boot off of the individual partitions. Specifically, you do this.
mdadm --create /dev/md0 --metadata 1.0 --raid-devices=2 --level=1 /dev/sd[ab]1
mkfs.fat -F32 /dev/md0
Then proceed with the installation. As far as EFI is concerned, though, the boot partitions are /dev/sda1 and /dev/sdb1 individually. You set each one up as a boot device, and then if, say, /dev/sda fails, the system will still boot from /dev/sdb1. After the system is booted the /dev/md0 RAID 1 kicks in, ensuring that /dev/sda1 and /dev/sdb1 stay synchronized.
I set all my systems up like this and haven't had any problems. (Note that setting the mdadm metadata to 1.0 is necessary when installing a software raid on a boot partition.)
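To make each member ESP bootable on its own, you can register one boot entry per disk with efibootmgr. This is a sketch: the loader path, partition numbers and labels are assumptions to adapt to your setup.

```shell
# One boot entry per physical disk, both pointing at the same loader path
efibootmgr --create --disk /dev/sda --part 1 --label "Linux (sda)" --loader '\EFI\BOOT\BOOTX64.EFI'
efibootmgr --create --disk /dev/sdb --part 1 --label "Linux (sdb)" --loader '\EFI\BOOT\BOOTX64.EFI'
```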
| Why is uEFI firmware unable to access a software RAID 1 /boot/efi partition? |
1,395,153,498,000 |
My neighbor brought over a 3TB external hard drive saying that after loaning it out to a Windows user, her Mac is asking her "to initialize something" whenever she plugs it in to her computer.
I'm using Fedora, and I'm trying to recover any data off of the drive before I let her try anything on her computer, because I have a feeling she will lose the data if she lets her computer attempt to "initialize" the drive.
I suspected the problem had something mangled with partition tables. Using fdisk I get the following output for the drive:
Disk /dev/sdd: 2.7 TiB, 3000558944256 bytes, 732558336 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: BAAE909E-8289-421C-A8D7-9DC750F0E342
Device Start End Sectors Size Type
/dev/sdd1 6 32773 32768 128M Microsoft reserved
/dev/sdd2 33024 732558079 732525056 2.7T Microsoft basic data
Using blkid, I get this:
/dev/sdd: PTUUID="baae909e-8289-421c-a8d7-9dc750f0e342" PTTYPE="gpt"
And using parted, I get this:
Model: WD My Book 1230 (scsi)
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 24.6kB 134MB 134MB Microsoft reserved partition msftres
2 135MB 3001GB 3000GB Basic data partition msftdata
I noticed immediately that it didn't have anything for the 'File system' column. How can I get this to mount in read-only at the least, even if it's just for me, so I can copy off the files she has on there?
UPDATE 1
Using file -sL /dev/sdd* produces:
/dev/sdd: ; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last)\011
/dev/sdd1: data
/dev/sdd2: data
Trying to mount it using various partition types, using both /dev/sdd and /dev/sdd2. --
ntfs and ntfs-3g:
NTFS signature is missing.
Failed to mount '/dev/sdd2': Invalid argument
The device '/dev/sdd2' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
exfat:
FUSE exfat 1.0.1
ERROR: exFAT file system is not found.
vfat:
mount: wrong fs type, bad option, bad superblock on /dev/sdd2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
UPDATE 2
The partition tables were not recoverable; I had to run a rescue to recover the data. Installing testdisk and running photorec worked like a champ; I was able to get back all of the lost data.
|
It seems that the drive has been formatted by Windows - which is not surprising, since Windows definitely must have been unable to use the disk which had very likely been formatted by OS X for sole use under OS X. Now the problem is exactly the same, just with the sides swapped.
If you want to mount the Windows partition, you can try to blindly guess the file system:
mount -t FILESYSTEM -o ro /dev/sdd2 /mountpoint
where FILESYSTEM is likely to be (given the size of the partition) one of NTFS, exFAT or (less likely) VFAT. For NTFS one can use either the in-kernel ntfs driver (in read-only mode) or the FUSE implementation ntfs-3g. exFAT has an (allegedly) working FUSE implementation; VFAT has the in-kernel vfat driver. In any case consider doing the mount with -o ro or even creating a read-only loop device for the partition and mounting that. The reason for such a cumbersome approach is that some file system drivers may update the file system even if mounted in read-only mode (usually by fiddling with metadata), which is definitely undesirable.
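The guessing can be scripted. A minimal sketch (device and mountpoint are assumptions) that tries each plausible filesystem read-only and stops at the first one that mounts:

```shell
for fs in ntfs exfat vfat hfsplus; do
    if mount -t "$fs" -o ro /dev/sdd2 /mnt; then
        echo "mounted as $fs"
        break
    fi
done
```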
If you want to try to rescue the original (read "pre-Windows") data, check the Q&As referenced by Gilles (Recovering accidentally deleted files and How to recover data from a bad SD card?) and search the internet for file system recovery for the file system used by OS X, most probably HFS Plus.
As for the general question of "initialising a disk": I believe this happens whenever the system doesn't find a partition scheme it understands - this will happen without either an MBR partition table or a GPT on the disk - or if no partitions it recognizes are of "the right type". This can be surprising when one is used to Linux (and I would suppose BSDs as well) which doesn't pay attention to the partition types, instead caring about the actual content only.
| Problems mounting GPT partitioned external HDD |
1,395,153,498,000 |
I've been trying all day to get my new Wheezy install completed but it fail to install Grub every time. I'm using x64 netinstall iso.
Here is my partition table:
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sda: 5860533168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2048s 6143s 4096s grub bios_grub
2 6144s 1953791s 1947648s boot raid
3 1953792s 31250431s 29296640s root raid
4 31250432s 5860532223s 5829281792s home raid
Disk /dev/sdb has the same partition table. You can see I've added that infamous small partition and marked it as bios_grub to give Grub space, since a GPT takes up more room at the start of the disk than a legacy msdos table does.
Error message I get from installer is "Failed to install Grub to /dev/sda" (or similar).
Partitions 2, 3 and 4 make three RAID1 partitions for /boot, /, and /home. All properly selected and formatted in Debian installer.
Please help!
|
Looks like somehow Debian installer screws up the partition table. The "bios_grub" flag gets removed and becomes "raid" flag. The fix is to rework the partition table again with parted and set it back.
parted /dev/sda
set 1 bios_grub on
quit
Same for /dev/sdb, and then chrooting and installing grub with answer from this question:
How can I fix/install/reinstall grub?
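For reference, the chroot-and-reinstall sequence usually looks roughly like this. It is a sketch that assumes /dev/md1 holds / and /dev/md0 holds /boot; adjust the device names to your layout.

```shell
mount /dev/md1 /mnt                  # root filesystem
mount /dev/md0 /mnt/boot             # boot filesystem
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install /dev/sda    # install to both RAID members
chroot /mnt grub-install /dev/sdb
chroot /mnt update-grub
```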
| Grub fails to install - Debian Wheezy with mdadm RAID1 and GPT partition table |
1,395,153,498,000 |
I am trying to install FreeBSD in legacy mode (BIOS) on a UEFI system, since I have an Intel Iris Graphics 6100, which is from Broadwell series and is not supported yet by the intel driver, so I want to be able to use the vesa driver - which is not supported by UEFI.
I already have 2 Linux systems installed, on a GPT disk, and I started the FreeBSD live CD in legacy mode, believing (stupidly, I must say), that it would install in legacy mode, and that I would be able to boot from it in legacy mode.
So, is there a way to boot from FreeBSD in legacy mode, on a GPT disk, or to have support for Broadwell graphics card in FreeBSD while using UEFI?
|
Yes, you can install FreeBSD in legacy mode on a GPT disk.
You can achieve it by creating a small partition flagged bios_grub (important) before installing FreeBSD; this partition is required to successfully install GRUB to the Master Boot Record.
Some newer systems use the GUID Partition Table (GPT) format. This was specified as part of the Extensible Firmware Interface (EFI), but it can also be used on BIOS platforms if system software supports it; for example, GRUB and GNU/Linux can be used in this configuration. With this format, it is possible to reserve a whole partition for GRUB, called the BIOS Boot Partition. GRUB can then be embedded into that partition without the risk of being overwritten by other software and without being contained in a filesystem which might move its blocks around.
When creating a BIOS Boot Partition on a GPT system, you should make sure that it is at least 31 KiB in size. (GPT-formatted disks are not usually particularly small, so we recommend that you make it larger than the bare minimum, such as 1 MiB, to allow plenty of room for growth.) You must also make sure that it has the proper partition type. Using GNU Parted, you can set this using a command such as the following:
parted /dev/disk set partition-number bios_grub on
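The same BIOS boot partition can be created non-interactively with sgdisk, whose type code ef02 corresponds to parted's bios_grub flag. A sketch, with /dev/sdX as a placeholder:

```shell
# Create partition 1: default start, 1 MiB long, typed as BIOS boot partition
sgdisk --new=1:0:+1MiB --typecode=1:ef02 --change-name=1:"BIOS boot" /dev/sdX
```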
| Boot FreeBSD in legacy mode on GPT disk |
1,395,153,498,000 |
I have a disk with classic MBR and want to transform it to use GPT without data loss. I have seen several more or less useful tutorials, but most of them are dealing with the specific problems related to GRUB, the operating systems and multiple partitions on a disk. In my case, the situation is much simpler - I have a simple disk used to store data on a single partition. I discovered that simply running gdisk and pressing w writes GPT to the disk and I can mount and use it without issues afterwards.
I am worried about data corruption though, gdisk warns me that the operation I'm about to perform is potentially destructive, and I've seen diagrams on which GPT occupies some space which is normally used by the first partition. So my questions are:
Is this a good way to transform MBR to GPT?
Can GPT overwrite some data which was originally on the primary partition, thus corrupting my files or the filesystem?
|
I created an MBR disk with one partition, filled every single byte on that partition with data, created a SHA1 checksum of the whole partition, converted it to GPT as described in the question, created yet another checksum and compared it with the original. They were the same. So my conclusion is this:
You can safely convert a disk to GPT without corrupting the data.
Warning: This does not mean the procedure is safe. It might corrupt your partitions. Always make a backup before converting using this approach.
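If you want to reproduce the check yourself before converting, here is a sketch, with /dev/sdX1 standing for your data partition:

```shell
sha1sum /dev/sdX1 > before.sha1      # checksum of the whole partition
# ... run gdisk /dev/sdX and press w to write the GPT ...
sha1sum -c before.sha1               # reports OK if the data is untouched
```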
| Transforming a disk from MBR to GPT |
1,395,153,498,000 |
we have BBB based custom board with 256MB RAM and 4GB eMMC,
I have partitioned it using below code,
parted --script -a optimal /dev/mmcblk0 \
mklabel gpt \
mkpart primary 128KiB 255KiB \
mkpart primary 256KiB 383KiB \
mkpart primary 384KiB 511KiB \
mkpart primary 1MiB 2MiB \
mkpart primary 2MiB 3MiB \
mkpart primary 3MiB 4MiB \
mkpart primary 4MiB 5MiB \
mkpart primary 5MiB 10MiB \
mkpart primary 10MiB 15MiB \
mkpart primary 15MiB 20MiB \
mkpart primary 20MiB 21MiB \
mkpart primary 21MiB 22MiB \
mkpart primary 22MiB 23MiB \
mkpart primary 23MiB 28MiB \
mkpart primary ext4 28MiB 528MiB \
mkpart primary ext4 528MiB 1028MiB \
mkpart primary ext4 1028MiB 1128MiB \
mkpart primary ext4 1128MiB 1188MiB \
mkpart primary ext4 1188MiB 2212MiB \
mkpart primary ext4 2212MiB 2603MiB \
mkpart primary ext4 2603MiB 2639MiB \
mkpart primary ext4 2639MiB 100% \
And then formatted file system partitions using below command
mkfs.ext4 -j -L $LABEL $PARTITION
Now when I read the file system block size using tune2fs, I see different values for partitions smaller than 1GiB and partitions greater than or equal to 1GiB.
# tune2fs -l /dev/mmcblk0p15 | grep Block
Block count: 512000
Block size: 1024
Blocks per group: 8192
#
#
# tune2fs -l /dev/mmcblk0p16 | grep Block
Block count: 512000
Block size: 1024
Blocks per group: 8192
#
#
# tune2fs -l /dev/mmcblk0p19 | grep Block
Block count: 262144
Block size: 4096
Blocks per group: 32768
# tune2fs -l /dev/mmcblk0p22 | grep Block
Block count: 1191936
Block size: 4096
Blocks per group: 32768
I am not able to understand why block sizes are different.
Moreover, mke2fs.conf has only default values, and the blocksize mentioned there is 4096.
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384
[fs_types]
ext3 = {
features = has_journal
}
ext4 = {
features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
auto_64-bit_support = 1
inode_size = 256
}
ext4dev = {
features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
inode_size = 256
options = test_fs=1
}
small = {
blocksize = 1024
inode_size = 128
inode_ratio = 4096
}
floppy = {
blocksize = 1024
inode_size = 128
inode_ratio = 8192
}
big = {
inode_ratio = 32768
}
huge = {
inode_ratio = 65536
}
news = {
inode_ratio = 4096
}
largefile = {
inode_ratio = 1048576
blocksize = -1
}
largefile4 = {
inode_ratio = 4194304
blocksize = -1
}
hurd = {
blocksize = 4096
inode_size = 128
}
Can someone explain/suggest a doc/hint why block sizes are different for different partitions?
|
As @derobert mentioned in the comment.
mkfs.ext4/mke2fs refers to /etc/mke2fs.conf and formats the partition.
mke2fs chooses block size based on the partition size if not explicitly mentioned. Read -b block-size and -T usage-type in mke2fs man page for the same.
So when partition size is less than 512MB mkfs.ext4 formats it as small with following settings from mke2fs.conf file.
small = {
blocksize = 1024
inode_size = 128
inode_ratio = 4096
}
However when partition size is more than 512MB mkfs.ext4 or mke2fs formats partition using defaults from mke2fs.conf file
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384
That's what was causing different block sizes in the different partitions for me.
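If you want every partition to use the same block size regardless of its size, you can override the heuristic explicitly with -b, as mentioned above (label and device are placeholders):

```shell
# Force a 4 KiB block size even on partitions mke2fs would treat as "small"
mkfs.ext4 -b 4096 -j -L data /dev/mmcblk0p15
```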
One more note: the total number of inodes you will get after formatting can be calculated as follows,
Total number of inodes = partition size / inode_ratio
e.g.
for 500MB partition
total number of inodes = (500 * 1024 * 1024) / 4096
= 128000
NOTE: I think I am missing something here, because for the calculation I have shown above, the actual value shown by tune2fs is Inode count: 128016, which nearly matches but is not exact.
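The arithmetic itself is easy to check in the shell; for a 500 MiB partition formatted with the small profile (inode_ratio = 4096):

```shell
# 500 MiB divided by the inode_ratio of the "small" profile
echo $(( 500 * 1024 * 1024 / 4096 ))   # prints 128000
```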
| File system block size differs between different ext4 partitions |
1,395,153,498,000 |
I have a Debian Jessie (3.16.7-ckt20-1+deb8u3) system with RAID1 on 2x 3TB hard drives. Grub can't be installed into MBR on drives >2TB, thus I have GPT with 1MB bios partition:
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 1953128447 1953124352 931.3G Linux RAID
/dev/sda3 1953128448 5860532223 3907403776 1.8T Linux RAID
After rebooting (kernel upgraded deb8u2 -> deb8u3) system ended up in initramfs rescue:
Loading, please wait...
mdadm: No device listed in conf file were found.
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/disk/by-uuid/5887d2e0-bae1-4ce8-ac6f-168fb183d7b0 does not exist.
Dropping to a shell!
modprobe: module ehci-orion not found in modules.dep
BusyBox v1.22.1 (Debian 1:1.22.0-9+deb8u1) built-in shell (ash)
Enter 'help' for a list of built-in commands.
/bin/sh: can't access tty; job control turned off
(initramfs)
From the console I'm able to check that the RAID array seems to be OK:
cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
1953570816 blocks super 1.2 [2/2] [UU]
bitmap: 0/15 pages [0KB], 65536KB chunk
md1 : active raid1 sda2[0] sdb2[1]
976431104 blocks super 1.2 [2/2] [UU]
bitmap: 0/8 pages [0KB], 65536KB chunk
unused devices: <none>
The missing device is md1, which holds the root partition but is not present in /dev/md/. Also the config file /etc/mdadm/mdadm.conf shows the same content as mdadm --examine --scan:
$ mdadm --examine --scan
ARRAY /dev/md/1 metadata=1.2 UUID=c366b4e9:e33d2b69:3c738749:07b022c6 name=w02:1
ARRAY /dev/md/2 metadata=1.2 UUID=c32939b8:bc01f4ff:b85f00c6:b50aa29e name=w02:2
Using mdadm --examine /dev/sda2 I've checked that all RAID partitions are in a clean state (AA). Is there something more I can do?
Can I try to continue booting manually? How would I do that? How would I increase rootdelay= for the next reboot? (The system waited for the right device, so it's not the second suggested problem.)
|
If you simply exit from the rescue shell, the system will try to continue to boot. If you need to increase rootdelay you can add it to your kernel options in /etc/default/grub and run update-grub.
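A sketch of that edit on Debian, where the file lives at /etc/default/grub and the delay value is just an example:

```shell
# In /etc/default/grub, extend the default kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# then regenerate the grub configuration:
update-grub
```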
| mdadm: no devices listed in conf file were found - Debian 8 with GPT |
1,395,153,498,000 |
My current idea is to create one software array, class RAID-6, with 4 member drives, using mdadm.
Specifically, the drives would be 1 TB HDDs on SATA in a small server Dell T20.
Operating System is GNU/Linux Debian 8.6 (later upgraded: Jessie ⟶ Stretch ⟶ Buster)
That would make 2 TB of disk space with 2 TB of parity in my case.
I would also like to have it with a GPT partition table. I am unsure how to proceed for that to work, specifically supposing I would prefer to do this purely in the terminal.
As I never created a RAID array, could you guide me on how I should proceed?
Notes:
This array will serve for data only. No boot or OS on it.
|
In this answer, let it be clear that all data will be destroyed on all of the array members (drives), so back it up first!
Open a terminal and become root (su); if you have sudo enabled, you may also do, for example, sudo -i (see man sudo for all options):
sudo -i
First, we should erase the drives, if there was any data and filesystems before, that is. Suppose we have 4 members: sdi, sdj, sdk, sdl. For the purpose of having feedback of this process visually, the pv (pipe viewer) was used here:
pv < /dev/zero > /dev/sdi
pv < /dev/zero > /dev/sdj
pv < /dev/zero > /dev/sdk
pv < /dev/zero > /dev/sdl
Alternatively, to just check that nothing is left behind, you may peek with GParted at all of the drives; if there is any partition, with or without a filesystem, wiping it could be enough, though I myself prefer zeroing all of the drives as above. Remember to unmount all partitions before doing so; it can be done with one-liners similar to these:
umount /dev/sdi?; wipefs --all --force /dev/sdi?; wipefs --all --force /dev/sdi
umount /dev/sdj?; wipefs --all --force /dev/sdj?; wipefs --all --force /dev/sdj
umount /dev/sdk?; wipefs --all --force /dev/sdk?; wipefs --all --force /dev/sdk
umount /dev/sdl?; wipefs --all --force /dev/sdl?; wipefs --all --force /dev/sdl
Then, we initialize all drives with GUID partition table (GPT), and we need to partition all of the drives, but don't do this with GParted, because it would create a filesystem in the process, which we don't want, use gdisk instead:
gdisk /dev/sdi
gdisk /dev/sdj
gdisk /dev/sdk
gdisk /dev/sdl
In all cases use the following:
o Enter for new empty GUID partition table (GPT)
y Enter to confirm your decision
n Enter for new partition
Enter for default of first partition
Enter for default of the first sector
Enter for default of the last sector
fd00 Enter for Linux RAID type
w Enter to write changes
y Enter to confirm your decision
You can examine the drives now:
mdadm --examine /dev/sdi /dev/sdj /dev/sdk /dev/sdl
It should say:
(type ee)
If it does, we now examine the partitions:
mdadm --examine /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
It should say:
No md superblock detected
If it does, we can create the RAID6 array:
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
We should wait until the array is fully created, this process we can easily watch:
watch cat /proc/mdstat
After the creation of the array, we should look at its detail:
mdadm --detail /dev/md0
It should say:
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Now we create a filesystem on the array. If you use ext4, the command below is better avoided, because ext4lazyinit would take a noticeable amount of time in case of a large array, hence the name, "lazyinit"; therefore I recommend you avoid this one:
mkfs.ext4 /dev/md0
Instead, you should force a full instant initialization (with 0% reserved for root as it is a data array):
mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
By specifying these options, the inodes and the journal will be initialized immediately during creation, useful for larger arrays.
If you chose to take a shortcut and created the ext4 filesystem with the "better avoided command", note that ext4lazyinit will take noticeable amount of time to initialize all of the inodes, you may watch it until it is done, e.g. with iotop or nmon.
Either way you choose to make the file system initialization, you should mount it after it has finished its initialization.
We now create some directory for this RAID6 array:
mkdir -p /mnt/raid6
And simply mount it:
mount /dev/md0 /mnt/raid6
Since we are essentially done, we may use GParted again to quickly check if it shows linux-raid filesystem, together with the raid flag on all of the drives.
If it does, we properly created the RAID6 array with GPT partitions and can now copy files on it.
See what UUID the md filesystem has:
blkid /dev/md0
Copy the UUID to clipboard.
Now we need to edit fstab, with your favorite text editor, I used nano, though sudoedit might better be used:
nano /etc/fstab
And add an entry to it:
UUID=<the UUID you have in the clipboard> /mnt/raid6 ext4 defaults 0 0
I myself do not recommend using defaults set of flags, I merely wanted the line not to be overly complex.
Here is what mount flags I use on a UPS backed-up data RAID (instead of defaults):
nofail,nosuid,nodev,noexec,nouser,noatime,auto,async,rw,data=journal,errors=remount-ro
You may check if it is correct after you save the changes:
mount -av | grep raid6
It should say:
already mounted
If it does, we save the array configuration. If you don't have any md arrays recorded in the config file yet, you can simply do:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
In case there are arrays already existent, just run the previous command without redirection to the config file:
mdadm --detail --scan
and add the new array to the config file manually.
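To append just the new array's line without duplicating existing entries, something like this sketch works (the /dev/md0 name is an assumption; mdadm may report it as /dev/md/0 on some systems):

```shell
# Append only the line describing the new array
mdadm --detail --scan | grep '^ARRAY /dev/md0 ' >> /etc/mdadm/mdadm.conf
```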
In the end, don't forget to update your initramfs, because otherwise your new array will only auto read-only assemble, probably as /dev/md127 or similar:
update-initramfs -u -k all
Check if you did everything according to plan, and if so, you may restart:
reboot
| mdadm RAID implementation with GPT partitioning |
1,395,153,498,000 |
I was fiddling around with the parted command on a loopback disk and tried to create some partitions using a gpt partition table, but I keep getting Error: Unable to satisfy all constraints on the partition. when trying to create a logical partition
$ sudo parted /dev/loop0
(parted) mktable gpt
(parted) mkpart primary 1MiB 201MiB
(parted) mkpart extended 201MiB -0MiB
(parted) unit MiB print
Model: Loopback device (loop)
Disk /dev/loop0: 102400MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1.00MiB 201MiB 200MiB primary
2 201MiB 102400MiB 102199MiB extended
(parted) mkpart logical 202MiB 1024MiB
Error: Unable to satisfy all constraints on the partition.
Recreating the same partitions using msdos part table doesn't give such error, though. So any idea what's wrong?
% sudo parted /dev/loop0
GNU Parted 2.3
Using /dev/loop0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mktable msdos
(parted) mkpart primary 1MiB 201MiB
(parted) mkpart extended 201MiB -0MiB
(parted) mkpart logical 202MiB 1024MiB
(parted) unit MiB print
Model: Loopback device (loop)
Disk /dev/loop0: 102400MiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1.00MiB 201MiB 200MiB primary
2 201MiB 102400MiB 102199MiB extended lba
5 202MiB 1024MiB 822MiB logical
|
The extended and logical partitions make sense only with an msdos partition table. Their only purpose is to allow you to have more than 4 partitions. With GPT, there are only 'primary' partitions and their number is usually limited to 128 (however, in theory there is no upper limit implied by the disklabel format). Note that on GPT none of the partitions may overlap (compare to msdos, where the extended partition is expected to overlap with all contained logical partitions, obviously).
Next thing about GPT is that partitions could have names, and here comes the confusion: the mkpart command has different semantics depending on whether you use GPT or msdos partition table.
With an msdos partition table, the second argument to mkpart is the partition type (primary/logical/extended), whereas with GPT the second argument is the partition name. In your case that was 'primary', 'extended' and 'logical' respectively. So parted created two GPT partitions, the first named 'primary' and the second named 'extended'. The third partition you tried to create (the 'logical' one) would overlap with 'extended', so parted refuses to do it.
In short, extended and logical partitions do not make sense on GPT. Just create as many 'normal' partitions as you like and give them proper names.
| Unable to create logical partition with Parted |
1,395,153,498,000 |
I had a gpt-partitioned drive, with unpartitioned space at the end, I used dd to clone it to another smaller drive. Unfortunately Linux won't see the partitions on the cloned drive.
My understanding is that GPT has two copies of the partition table, the primary copy at the start just after the MBR table, and the secondary one at the end. So it should be possible to fix the partitioning on the cloned drive, what if-any tools can be used to do this?
|
gdisk was able to fix the drive. It displayed some warnings, but was able to correctly read the primary copy of the GPT, adjust the location of the secondary GPT, and write the partition table back to the disk.
I also tried fdisk and gparted, but neither of them was able to correctly handle the drive. fdisk only saw the protective MBR. gparted said that the backup GPT was corrupt and it was using the primary one, but then failed to see any of the partitions on the drive.
| repair gpt after cloning to smaller drive |
1,395,153,498,000 |
What will happen to all the remaining partition labels if I remove a single partition? For example if I have a layout that looks like this:
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sda4
/dev/sda5
and if I remove /dev/sda2 will /dev/sda3, /dev/sda4 and /dev/sda5 "shift" their numbers, and am I going to get this:
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sda4
or will the "gap" stay there without any changes for the labels, giving me this:
/dev/sda1
/dev/sda3
/dev/sda4
/dev/sda5
|
Traditionally, Linux on x86 hardware has used MSDOS partition tables. In this case, removing /dev/sda2 won't shift any of the higher numbered partitions down, because the primary partitions act like "slots": you can use them in any order you like, and removing one doesn't affect any of the others.
If instead you had sda{1-7} with sda4 being the extended partition and sda{5-7} being logical partitions within that extended partition, deleting sda6 would shift sda7 down. Logical partitions simply behave differently in this regard.
Newer versions of Linux are switching to GPT partition tables instead, though this is a slow process since there are limitations that prevent wholesale switching at this time.
In the GPT case, you don't need to use extended partitions to get more than 4 partitions on a single disk, and like MSDOS primary partitions, GPT partition numbers work like slots. You can delete a partition from the middle of a range and only leave a hole, with the existing partitions keeping their number. If you then create a new one, it fills the hole.
Your question asks about partition labels, however, and nothing I've talked about so far has anything to do with labels. Partition labels, in the sense used in Linux, are attributes of the filesystem, not the partition table. They exist to prevent changes to device names from causing problems with mounting filesystems. By using filesystem labels, you don't have to worry about device name changes because you're mounting partitions by label, not by device name. This is particularly helpful in cases like USB, where the device naming scheme is dynamic, and depends in part on what has been plugged in previously since the last reboot.
Linux mkfs.* programs typically use the -L flag to specify the label.
To mount a partition by label instead of by device name, use LABEL=mypartname in the first column of /etc/fstab. If you check your current /etc/fstab, you'll probably find that there are already partitions being mounted that way. Linux GUI installers typically do this for you as a convenience.
You can mount a filesystem by label interactively, too, by passing the label with -L to mount(8).
GPT does allow you to name a partition, but I don't know that it has anything to do with anything discussed above.
EDIT: One thing you do get with GPT which is relevant here, however, is a unique identifier for each partition, called a UUID. They work similarly to labels, but are different in several ways:
UUIDs are automatically assigned pseudorandom numbers, rather than a logical name you pick yourself.
You use -U instead of -L to mount(8) a partition by UUID rather than by label.
You use UUID=big-ugly-hex-number instead of LABEL=mynicelabel in /etc/fstab.
They are attributes of the partition, not the filesystem, so they will work with any filesystem as long as you can use GPT. A good example is a FAT32 partition on a USB stick: FAT32 doesn't have a filesystem label, and since it's on a USB stick you can't reliably predict which /dev/sd* name it will get.
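For illustration, hypothetical /etc/fstab entries using both mechanisms (the label, UUID and mount points below are examples only, not values from any real system):

```
# hypothetical /etc/fstab entries; label, UUID and mount points are placeholders
LABEL=mydata                               /mnt/data  ext4  defaults  0  2
UUID=0b6ee33c-58b3-49ab-9ef5-34dbf2a7f6bf  /home      ext4  defaults  0  2
```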
| What happens to partition labels after removing a partition? |
1,395,153,498,000 |
When I use cfdisk to create a new partition, I usually change its type to Linux filesystem. There are multiple types for most operating systems, but a very large number for Linux (architecture-specific types for root, /usr, and something called “verity”?).
But isn't /etc/fstab the file that gives meaning to these partitions? Why should I make my swap partition type Linux swap and my root partition type Linux root (x86-64)?
|
The idea behind all these different partition type GUIDs is that they can be used to mount a system’s volumes without /etc/fstab. The partition types are defined in the discoverable partitions specification. With systemd, this is handled by systemd-gpt-auto-generator.
The general idea behind this is to be able to build systems with no system-specific information in /etc, so that a single static image can be used reliably without needing any customisation. (Obviously this needs a fair bit more than partition type GUIDs, but that’s the driver.) See also Lennart Poettering’s blog post on the topic.
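For illustration, here is a small lookup of a few well-known type GUIDs. As far as I know these match the discoverable partitions specification, but treat the spec itself as authoritative:

```shell
# Map a (lowercased) GPT partition type GUID to a human-readable name.
gpt_type_name() {
  case "$1" in
    0fc63daf-8483-4772-8e79-3d69d8477de4) echo "Linux filesystem" ;;
    4f68bce3-e8cd-4db1-96e7-fbcaf984b709) echo "Linux root (x86-64)" ;;
    0657fd6d-a4ab-43c4-84e5-0933c84b4f4f) echo "Linux swap" ;;
    c12a7328-f81f-11d2-ba4b-00a0c93ec93b) echo "EFI System Partition" ;;
    *) echo "unknown" ;;
  esac
}

gpt_type_name 0657fd6d-a4ab-43c4-84e5-0933c84b4f4f   # -> Linux swap
```

Tools like systemd-gpt-auto-generator essentially do this kind of mapping when they decide where to mount each partition.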
| What is the significance of GPT's "Partition Type GUIDs"? |
1,501,411,620,000 |
Under the MBR model, we could create four primary partitions, one of which could be an extended partition that's further subdivided into logical partitions.
Consider this GPT schematic taking from Wikipedia:
Partition entries range from LBA 1 to LBA 34; presumably we could run out of that space. I understand that's a fair number of partitions, but is it possible to make an extended partition if the disk is partitioned with GPT? If it is possible, how many extended partitions per GPT partition table can we make?
I'm not sure whether it is standard to have partition entries within the range LBA 1 to LBA 34; maybe we could expand the partition entries beyond that?
Practically this is a fair amount of partitions, I have no intention to do that.
|
128 partitions is the default limit for GPT, and it's probably painful in practice to use half that many...
Linux itself originally also had some limitations in its device namespace. For /dev/sdX it assumes no more than 15 partitions (sda is 8,0, sdb is 8,16, etc.). If there are more partitions, they will be represented using 259,X aka Block Extended Major.
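The traditional numbering can be sketched as simple arithmetic (this mirrors the 8,0 / 8,16 scheme just mentioned; partitions beyond 15 fall through to major 259):

```shell
# Minor number for /dev/sdXN under major 8: 16 minors per disk, minor 0 = whole disk.
sd_minor() {  # $1 = disk index (sda=0, sdb=1, ...), $2 = partition number
  echo $(( $1 * 16 + $2 ))
}

sd_minor 0 0    # sda   -> 0
sd_minor 1 0    # sdb   -> 16
sd_minor 0 15   # sda15 -> 15, the last partition that fits under major 8
```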
You could certainly still do more partitions in various ways. Loop devices, LVM, or even GPT inside GPT. Sometimes this happens naturally when handing partitions as block devices to virtual machines: they see the partition as a virtual disk drive and partition that.
Just don't expect such partitions inside partitions to be picked up automatically.
As @fpmurphy1 pointed out in the comments, I was wrong: You can change the limit, using gdisk, expert menu, resize partition table. This can also be done for existing partition tables, provided there is unpartitioned space (a 512-byte sector for 4 additional partition entries) at the start and end of the drive. However I'm not sure how widely supported this is; there doesn't seem to be an option for it in parted or other partitioners I've tried.
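For reference, the arithmetic behind those entry counts, assuming the usual 128-byte entries and 512-byte sectors (which is why the table grows in steps of 4 entries per sector):

```shell
# Sectors occupied by a GPT partition entry array.
gpt_table_sectors() {  # $1 = number of entries (128 bytes each, 512-byte sectors)
  echo $(( $1 * 128 / 512 ))
}

gpt_table_sectors 128     # default table: 32 sectors (16 KiB)
gpt_table_sectors 65536   # the maximum gdisk stumbles into: 16384 sectors (8 MiB)
```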
And the highest limit you can set with gdisk seems to be 65536 but it's bugged:
Expert command (? for help): s
Current partition table size is 128.
Enter new size (4 up, default 128): 65536
Value out of range
And then...
Expert command (? for help): s
Current partition table size is 128.
Enter new size (4 up, default 128): 65535
Adjusting GPT size from 65535 to 65536 to fill the sector
Expert command (? for help): s
Current partition table size is 65536.
Eeeh? Whatever you say.
But try to save that partition table and gdisk is stuck in a loop for several minutes.
Expert command (? for help): w
--- gdisk gets stuck here ---
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22253 root 20 0 24004 11932 3680 R 100.0 0.1 1:03.47 gdisk
--- unstuck several minutes later ---
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): Your option? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/loop0.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
And here's what parted has to say about the successfully completed operation:
# parted /dev/loop0 print free
Backtrace has 8 calls on stack:
8: /usr/lib64/libparted.so.2(ped_assert+0x45) [0x7f7e780181f5]
7: /usr/lib64/libparted.so.2(+0x24d5e) [0x7f7e7802fd5e]
6: /usr/lib64/libparted.so.2(ped_disk_new+0x49) [0x7f7e7801d179]
5: parted() [0x40722e]
4: parted(non_interactive_mode+0x92) [0x40ccd2]
3: parted(main+0x1102) [0x405f52]
2: /lib64/libc.so.6(__libc_start_main+0xf1) [0x7f7e777ec1e1]
1: parted(_start+0x2a) [0x40610a]
You found a bug in GNU Parted! Here's what you have to do:
Don't panic! The bug has most likely not affected any of your data.
Help us to fix this bug by doing the following:
Check whether the bug has already been fixed by checking
the last version of GNU Parted that you can find at:
http://ftp.gnu.org/gnu/parted/
Please check this version prior to bug reporting.
If this has not been fixed yet or if you don't know how to check,
please visit the GNU Parted website:
http://www.gnu.org/software/parted
for further information.
Your report should contain the version of this release (3.2)
along with the error message below, the output of
parted DEVICE unit co print unit s print
and the following history of commands you entered.
Also include any additional information about your setup you
consider important.
Assertion (gpt_disk_data->entry_count <= 8192) at gpt.c:793 in function
_parse_header() failed.
Aborted
So parted refuses to work with GPT that has more than 8192 partition entries. Nobody ever does that, so it has to be corrupt, right?
This is what happens when you don't stick to defaults.
| Are there extended partitions in GPT partition table? |
1,501,411,620,000 |
I'm looking at replacing my current MBR-partitioned 2 TB system drive with quite possibly a 3 TB drive. Copying the files should not pose a problem, but are there any gotchas to watch out for, particularly with regards to the boot loader, keeping in mind that MBR doesn't support anything more than 2 TB so I'll have to move to GPT? Or is it sufficient to partition the new drive, copying all files, update /etc/fstab in its new place, physically replace the old system drive with the new and then re-running grub-install?
I'm using Linux with GRUB 2 (specifically 1.99-27+deb7u1 on Debian Wheezy) on a single-boot system (no second OS installed to take into consideration).
|
Grub2 supports GPT, so you'll have no problem booting from the new drive. Whether your BIOS can boot a GPT drive is a different matter. If you switch your BIOS from legacy mode to EFI mode, you'll need to install the grub-efi package.
You'll need to install the bootloader on the new drive. The easiest way is to copy the data to the new drive first, then chroot into it and run grub-install, passing it the new drive as a command line argument. If you have both drives at this point, you may need to edit /boot/grub/device.map.
There are several ways to copy the files. The nicest way is to set up mirroring between the two drives via mdraid (Linux software RAID) or LVM. This has the advantage that you can keep using the system while it's setting up the mirror; once it's done, install the bootloader, reboot, break the mirror, and if desired enlarge at least one filesystem to make use of the extra space. If your filesystems are on PC partitions, you can convert them to RAID1, but it's fiddly. You can take this opportunity to put your filesystems on LVM volumes over RAID1 volumes — it's simple and makes maintenance easier.
If a large proportion of a filesystem is occupied, it's faster to copy the filesystem wholesale than to copy the files. It's difficult to give a threshold because that depends not only on the amount of disk space that's in use but also on the distribution of file sizes. To copy a filesystem wholesale, you can use cat </dev/sdOLD1 >/dev/sdNEW1 where sdOLD is the old disk (e.g. sda) and sdNEW is the new disk (e.g. sdb). Don't do this while the filesystem is mounted.
If you copy all the files, make sure to preserve all the metadata, especially ownership and permissions. cp -ax /media/old-root /media/new-root works.
If you've rearranged the partitions, make sure to update /etc/fstab. You may need to update /etc/crypttab if you have encrypted volumes.
| Copying OS from one drive to another migrating from MBR to GPT - what to watch out for? |
1,501,411,620,000 |
We have bbb based custom board containing eMMC.
And we have created partitions as follows,
parted --script -a minimal /dev/mmcblk0 \
mklabel gpt \
mkpart primary 131072B 262143B \
mkpart primary 262144B 393215B \
mkpart primary 393216B 524287B \
mkpart primary 524288B 1572863B \
mkpart primary 1572864B 2621439B \
mkpart primary 2621440B 3145727B \
mkpart primary 3145728B 3276799B \
mkpart primary 3276800B 8519679B \
mkpart primary 8519680B 13762559B \
mkpart primary 13762560B 19005439B \
mkpart primary 19005440B 19267583B \
mkpart primary 19267584B 19529727B \
mkpart primary 19529728B 19791871B \
mkpart primary 20MiB 31MiB \
mkpart primary ext4 32MiB 232MiB \
mkpart primary ext4 232MiB 432MiB \
mkpart primary ext4 432MiB 532MiB \
mkpart primary ext4 532MiB 592MiB \
mkpart primary ext4 592MiB 792MiB \
mkpart primary ext4 792MiB 827MiB \
mkpart primary ext4 827MiB 3650MiB \
With the above command we are able to partition the eMMC, but the alignment is set to minimal. We wanted to check how we can achieve optimal alignment.
Is there any advantage in achieving optimal alignment?
I referred to this link, but solutions like parted -a opt /dev/sdb mkpart primary 0% 100% cannot be used, as we need the following structure.
# parted --list
Model: MMC MMC04G (sd/mmc)
Disk /dev/mmcblk0: 3842MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 131kB 262kB 131kB
2 262kB 393kB 131kB
3 393kB 524kB 131kB
4 524kB 1573kB 1049kB
5 1573kB 2621kB 1049kB
6 2621kB 3146kB 524kB
7 3146kB 3277kB 131kB
8 3277kB 8520kB 5243kB
9 8520kB 13.8MB 5243kB
10 13.8MB 19.0MB 5243kB
11 19.0MB 19.3MB 262kB
12 19.3MB 19.5MB 262kB
13 19.5MB 19.8MB 262kB
14 21.0MB 32.5MB 11.5MB
15 33.6MB 243MB 210MB ext4
16 243MB 453MB 210MB ext4
17 453MB 558MB 105MB ext4
18 558MB 621MB 62.9MB ext4
19 621MB 830MB 210MB ext4
20 830MB 867MB 36.7MB ext4
21 867MB 3827MB 2960MB ext4
With the current command, if we replace minimal with optimal, we see the following messages from parted:
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Any pointers/suggestions/corrections for achieving optimal alignment?
As suggested in this link, I extracted the following values:
# cat /sys/class/mmc_host/mmc1/mmc1\:0001/block/mmcblk0/queue/optimal_io_size
0
# cat /sys/class/mmc_host/mmc1/mmc1\:0001/block/mmcblk0/queue/minimum_io_size
512
# cat /sys/class/mmc_host/mmc1/mmc1\:0001/block/mmcblk0/alignment_offset
0
# cat /sys/class/mmc_host/mmc1/mmc1\:0001/block/mmcblk0/queue/physical_block_size
512
mmc-utils gives the following output:
# ./mmc extcsd read /dev/mmcblk0
=============================================
Extended CSD rev 1.5 (MMC 4.41)
=============================================
Card Supported Command sets [S_CMD_SET: 0x01]
HPI Features [HPI_FEATURE: 0x03]: implementation based on CMD12
Background operations support [BKOPS_SUPPORT: 0x01]
Background operations status [BKOPS_STATUS: 0x02]
1st Initialisation Time after programmed sector [INI_TIMEOUT_AP: 0x7a]
Power class for 52MHz, DDR at 3.6V [PWR_CL_DDR_52_360: 0x00]
Power class for 52MHz, DDR at 1.95V [PWR_CL_DDR_52_195: 0x00]
Minimum Performance for 8bit at 52MHz in DDR mode:
[MIN_PERF_DDR_W_8_52: 0x00]
[MIN_PERF_DDR_R_8_52: 0x00]
TRIM Multiplier [TRIM_MULT: 0x06]
Secure Feature support [SEC_FEATURE_SUPPORT: 0x15]
Secure Erase Multiplier [SEC_ERASE_MULT: 0x02]
Secure TRIM Multiplier [SEC_TRIM_MULT: 0x03]
Boot Information [BOOT_INFO: 0x07]
Device supports alternative boot method
Device supports dual data rate during boot
Device supports high speed timing during boot
Boot partition size [BOOT_SIZE_MULTI: 0x10]
Access size [ACC_SIZE: 0x06]
High-capacity erase unit size [HC_ERASE_GRP_SIZE: 0x08]
i.e. 4096 KiB
High-capacity erase timeout [ERASE_TIMEOUT_MULT: 0x01]
Reliable write sector count [REL_WR_SEC_C: 0x01]
High-capacity W protect group size [HC_WP_GRP_SIZE: 0x01]
i.e. 4096 KiB
Sleep current (VCC) [S_C_VCC: 0x08]
Sleep current (VCCQ) [S_C_VCCQ: 0x08]
Sleep/awake timeout [S_A_TIMEOUT: 0x10]
Sector Count [SEC_COUNT: 0x00728000]
Device is block-addressed
Minimum Write Performance for 8bit:
[MIN_PERF_W_8_52: 0x08]
[MIN_PERF_R_8_52: 0x08]
[MIN_PERF_W_8_26_4_52: 0x08]
[MIN_PERF_R_8_26_4_52: 0x08]
Minimum Write Performance for 4bit:
[MIN_PERF_W_4_26: 0x08]
[MIN_PERF_R_4_26: 0x08]
Power classes registers:
[PWR_CL_26_360: 0x00]
[PWR_CL_52_360: 0x00]
[PWR_CL_26_195: 0x00]
[PWR_CL_52_195: 0x00]
Partition switching timing [PARTITION_SWITCH_TIME: 0x01]
Out-of-interrupt busy timing [OUT_OF_INTERRUPT_TIME: 0x02]
Card Type [CARD_TYPE: 0x07]
HS Dual Data Rate eMMC @52MHz 1.8V or 3VI/O
HS eMMC @52MHz - at rated device voltage(s)
HS eMMC @26MHz - at rated device voltage(s)
CSD structure version [CSD_STRUCTURE: 0x02]
Command set [CMD_SET: 0x00]
Command set revision [CMD_SET_REV: 0x00]
Power class [POWER_CLASS: 0x00]
High-speed interface timing [HS_TIMING: 0x01]
Erased memory content [ERASED_MEM_CONT: 0x00]
Boot configuration bytes [PARTITION_CONFIG: 0x00]
Not boot enable
No access to boot partition
Boot config protection [BOOT_CONFIG_PROT: 0x00]
Boot bus Conditions [BOOT_BUS_CONDITIONS: 0x00]
High-density erase group definition [ERASE_GROUP_DEF: 0x01]
Boot write protection status registers [BOOT_WP_STATUS]: 0x00
Boot Area Write protection [BOOT_WP]: 0x00
Power ro locking: possible
Permanent ro locking: possible
ro lock status: not locked
User area write protection register [USER_WP]: 0x00
FW configuration [FW_CONFIG]: 0x00
RPMB Size [RPMB_SIZE_MULT]: 0x01
Write reliability setting register [WR_REL_SET]: 0x00
user area: existing data is at risk if a power failure occurs during a write operation
partition 1: existing data is at risk if a power failure occurs during a write operation
partition 2: existing data is at risk if a power failure occurs during a write operation
partition 3: existing data is at risk if a power failure occurs during a write operation
partition 4: existing data is at risk if a power failure occurs during a write operation
Write reliability parameter register [WR_REL_PARAM]: 0x05
Device supports writing EXT_CSD_WR_REL_SET
Device supports the enhanced def. of reliable write
Enable background operations handshake [BKOPS_EN]: 0x00
H/W reset function [RST_N_FUNCTION]: 0x00
HPI management [HPI_MGMT]: 0x01
Partitioning Support [PARTITIONING_SUPPORT]: 0x03
Device support partitioning feature
Device can have enhanced tech.
Max Enhanced Area Size [MAX_ENH_SIZE_MULT]: 0x0001ca
i.e. 1875968 KiB
Partitions attribute [PARTITIONS_ATTRIBUTE]: 0x00
Partitioning Setting [PARTITION_SETTING_COMPLETED]: 0x00
Device partition setting NOT complete
General Purpose Partition Size
[GP_SIZE_MULT_4]: 0x000000
[GP_SIZE_MULT_3]: 0x000000
[GP_SIZE_MULT_2]: 0x000000
[GP_SIZE_MULT_1]: 0x000000
Enhanced User Data Area Size [ENH_SIZE_MULT]: 0x000000
i.e. 0 KiB
Enhanced User Data Start Address [ENH_START_ADDR]: 0x000000
i.e. 0 bytes offset
Bad Block Management mode [SEC_BAD_BLK_MGMNT]: 0x00
EDIT: Achieved optimal alignment as suggested by "Глеб Майоров". Except for the first 3 partitions (which I can't change), the other small partitions were changed to a minimum size of 1MiB, and except for those 3 partitions all partitions now appear aligned. Here is the latest partitioning script I used for the eMMC. (Please note: in the script below I have changed the partition sizes w.r.t. the above script.)
parted --script -a optimal /dev/mmcblk0 \
mklabel gpt \
mkpart primary 128KiB 255KiB \
mkpart primary 256KiB 383KiB \
mkpart primary 384KiB 511KiB \
mkpart primary 1MiB 2MiB \
mkpart primary 2MiB 3MiB \
mkpart primary 3MiB 4MiB \
mkpart primary 4MiB 5MiB \
mkpart primary 5MiB 10MiB \
mkpart primary 10MiB 15MiB \
mkpart primary 15MiB 20MiB \
mkpart primary 20MiB 21MiB \
mkpart primary 21MiB 22MiB \
mkpart primary 22MiB 23MiB \
mkpart primary 23MiB 30MiB \
mkpart primary ext4 30MiB 530MiB \
mkpart primary ext4 530MiB 1030MiB \
mkpart primary ext4 1030MiB 1130MiB \
mkpart primary ext4 1130MiB 1190MiB \
mkpart primary ext4 1190MiB 1720MiB \
mkpart primary ext4 1720MiB 1755MiB \
mkpart primary ext4 1755MiB 100%
|
Try to align to the eMMC erase block size. It usually equals 0.5, 1, 2, 4 or 8 MiB, depending on the eMMC datasheet. If you find erase-block alignment too wasteful of space, then stick to the page size, generally in the range of 4..16 KiB.
Try to make partition sizes and borders a multiple of the erase block size, so that when the file system writes to its first or last block, the memory card doesn't have to erase and rewrite the beginning/end of the next/previous partition.
Don't rely on parted's features to align; just take a calculator, pen and a sheet of paper, and work out the correct borders in sectors or bytes.
Personally, I prefer aligning to an 8 MiB border, because it doesn't waste too much memory, and any partition then starts and ends on an erase-block border regardless of the specific erase block size, so I don't need to search for the memory card's documentation.
Optimal alignment reduces the write amplification factor, so your memory could last longer.
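As a sketch of the pen-and-paper arithmetic, here is a tiny shell helper that rounds a byte offset up to an erase-block boundary. The 8 MiB block size is an assumption; check your eMMC datasheet for the real value:

```shell
# Round a byte offset up to the next erase-block boundary (assumes 8 MiB blocks).
align_up() {  # $1 = byte offset
  local erase=$((8 * 1024 * 1024))
  echo $(( ($1 + erase - 1) / erase * erase ))
}

align_up 3276800   # a border like the 3276800B one above -> next 8 MiB boundary
```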
| how to achieve optimal alignment for emmc partition? |
1,501,411,620,000 |
I have cloned a 1GB pen drive to an 8GB one using dd.
But the size of the GPT is still 1GB. For example the secondary (backup) GPT is still located at 1GB (it has to be moved to the end of the disk).
Also I think two fields inside the main GPT header (at offsets 32 and 48) have to be updated.
I've looked into gdisk but couldn't find anything.
|
Example using gdisk:
# gdisk /dev/yourdisk
Command (? for help): v
Problem: The secondary header's self-pointer indicates that it doesn't reside
at the end of the disk. If you've added a disk to a RAID array, use the 'e'
option on the experts' menu to adjust the secondary header's and partition
table's locations.
Identified 1 problems!
gdisk can be a bit cryptic to use but here it tells you directly what to do to solve this problem through the experts menu (x, e).
Command (? for help): x
Expert command (? for help): ?
e relocate backup data structures to the end of the disk
Expert command (? for help): e
Relocating backup data structures to the end of the disk
At this point you can adjust partitions or just write it out as is:
Expert command (? for help): v
No problems found. 15624933 free sectors (7.5 GiB) available in 1
segments, the largest of which is 15624933 (7.5 GiB) in size.
Expert command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/yourdisk.
parted can also be used, it will ask you to fix it when you use any command:
# parted /dev/yourdisk print
Warning: Not all of the space available to /dev/yourdisk appears to be used,
you can fix the GPT to use all of the space (an extra 13671875 blocks)
or continue with the current setting?
Fix/Ignore?
fdisk will resize GPT by writing.
# fdisk /dev/yourdisk
Welcome to fdisk (util-linux 2.39.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
GPT PMBR size mismatch (1953124 != 15624999) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
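Regarding the two header fields at offsets 32 and 48 mentioned in the question (the backup-header LBA and the last usable LBA), you can inspect them by hand. A sketch, assuming GNU od; the device path is a placeholder, and the primary header sits at byte 512 (LBA 1) on 512-byte-sector disks:

```shell
# Read a little-endian 64-bit field from a file or block device (assumes GNU od).
read_le64() {  # $1 = file/device, $2 = byte offset
  od -A n -t u8 -j "$2" -N 8 --endian=little "$1" | tr -d ' '
}

# Hypothetical usage on a real disk (as root):
#   read_le64 /dev/sdX $((512 + 32))   # backup header LBA (should be the disk's last LBA)
#   read_le64 /dev/sdX $((512 + 48))   # last usable LBA
```

gdisk's "relocate backup data structures" option rewrites exactly these pointers for you, so this is only useful for verification or curiosity.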
| How to resize the GPT partition table itself on Linux? |
1,501,411,620,000 |
Everything I find on this topic suggests that GPT is related to UEFI, but is it possible to install using the GPT disk format on a 32-bit system using BIOS (not legacy mode)?
I tried installing Arch in a VM simulating 32bit and using a partition like:
-BBP
/boot
/
/home
swap
and it did not work.
Is it possible? If it is, is that the correct partitioning using BBP?
Maybe it's because I tried encrypting / and /home without using LVM, following the Arch guide, but I'm not sure.
|
You can do it.
GPT and (U)EFI are not strictly related concepts; it is merely convention that (U)EFI firmware uses GPT partition tables, or is at least compatible with them.
The BIOS (typically) can't see partitions, and the partition tables only rarely affect it. All the BIOS knows is that it has to read the first sector of the disk (the MBR) and then execute the code fragment in it.
Some BIOSes also do additional checks (checksum validity, the existence of the bootable flag, partition limits in the partition table) and so on; some don't. However, GPT is backward compatible with traditional partitioning, so it won't be a problem.
More exactly, a GPT layout that is not backward compatible with a traditional one (i.e. lacks the protective MBR) is invalid.
The only thing you have to do is somehow "help" your installer create a GPT layout (which may be the tricky part).
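That backward compatibility takes the form of a "protective MBR": the first MBR partition entry covers the whole disk with type 0xEE, so legacy tools and BIOSes see one opaque partition. A sketch to check for it (assumes GNU od; the device path is a placeholder):

```shell
# Print the partition type byte of the first MBR entry.
# The MBR partition table starts at byte 446; the type byte is at entry offset 4.
mbr_type() { od -A n -t x1 -j 450 -N 1 "$1" | tr -d ' '; }

# Hypothetical usage (as root): mbr_type /dev/sdX  -> prints "ee" on a GPT disk
```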
| Can I install using GPT on 32 bit system with bios? |
1,501,411,620,000 |
I am frequently testing bootable USB devices with different operating systems. Now I have to boot the whole computer just to test one USB device. How can I test the devices without booting? QEMU works sometimes, but not with UEFI GPT devices. Command sudo qemu-system-x86_64 /dev/sde1 just hangs with "Booting from Hard Disk ...".
|
sudo apt-get install ovmf qemu-efi qemu
qemu-system-x86_64 --bios /usr/share/qemu/OVMF.fd -m 4096 -enable-kvm -cdrom debian-9.2.1-amd64-DVD-1.iso -display sdl -vga virtio
You can also specify an HD (virtual or physical).
| How do I test bootable USB created with UEFI GPT partition scheme |
1,501,411,620,000 |
Using the GUID Partition Table and RAID 1, the bootloader (syslinux or GRUB) is not able to boot into the machine, which was installed with Arch Linux.
First off, there are two identical drives set up to use software RAID level 1.
The two drives are partitioned as follows:
sd[ab]1 as md2
sd[ab]2 as md1
sd[ab]3 as md0
Then md0 is set up as the volume group VolGroupArray and split using LVM, one logical volume as / (root) and the other as /home; md1 is used as swap, and md2 as /boot.
Running the following command grub-install --target=i386-pc --recheck --debug /dev/md2 the subsequent error occurs.
/usr/bin/grub-bios-setup warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
/usr/bin/grub-bios-setup error: will not proceed with blocklists
My questions:
Should I not install GRUB on a RAID partition?
What type of filesystem should the boot partition be?
What are blocklists?
What is good/bad about the partition scheme listed above?
Is it feasible to create a single partition on sda and sdb (/dev/sda1 and /dev/sdb1) mirror to that with RAID1 (/dev/md0) and then install LVM on md0, then create a boot, home, root, and swap partitions on the LVM? With this schema can GRUB or syslinux boot a RAID LVM boot partition? Why or why not?
|
The PC only boots from an individual disk, so that is where you must install GRUB. Note that you can install it on each of the disks individually, so that if one fails the other can still be used. GRUB 2 also does not require a dedicated /boot partition; it can boot from LVM on RAID directly.
| bootloader configuration with GPT, RAID1, and LVM |
1,501,411,620,000 |
I used dd to clone a smaller disk onto a larger disk, however now when booting I'm getting dmesg errors of:
[Fri Sep 30 11:48:43 2022] GPT:Primary header thinks Alt. header is not at the end of the disk.
[Fri Sep 30 11:48:43 2022] GPT:1953525167 != 3907029167
[Fri Sep 30 11:48:43 2022] GPT:Alternate GPT header not at the end of the disk.
[Fri Sep 30 11:48:43 2022] GPT:1953525167 != 3907029167
[Fri Sep 30 11:48:43 2022] GPT: Use GNU Parted to correct GPT errors.
How can I resolve this? The error says to use parted, but I'm unsure what commands to run.
|
You don't need to do anything special, just use p to print information about the disk, parted will tell you the partition table is wrong and ask you what to do so simply tell it to Fix it:
# parted /dev/loop0
GNU Parted 3.5
Using /dev/loop0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Warning: Not all of the space available to /dev/loop0 appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or continue with the current setting?
Fix/Ignore? Fix
...
(of course replace /dev/loop0 with your disk, e.g. /dev/sda).
| Fix GPT after using dd to clone a smaller disk onto a larger disk |
1,501,411,620,000 |
I have a hard disk that I use for backups via a USB 2.0 docking station. The disk has a GPT and one single ext4 partition. Everything is fine via the docking station, but if I attach the disk to an internal SATA port, or put it in a swap bay in my PC, the GPT is not there any more.
Here's what I get when the disk is in the docking bay, an everything works:
$ sudo fdisk -l /dev/sdg
Disk /dev/sdg: 1.8 TiB, 2000398934016 bytes, 488378646 sectors
Disk model: 001-1CH164
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 2C0A0696-2318-4BBD-9329-0115AB5AC313
Device Start End Sectors Size Type
/dev/sdg1 512 488378367 488377856 1.8T Linux filesystem
$ sudo parted /dev/sdg print
Model: ST2000DM 001-1CH164 (scsi)
Disk /dev/sdg: 2000GB
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2097kB 2000GB 2000GB ext4 MUSICBUP
Here's the output of the same commands when the disk is in the internal swap bay, or any other internal SATA port:
$ sudo fdisk -l /dev/sdg
GPT PMBR size mismatch (488378645 != 3907029167) will be corrected by write.
Disk /dev/sdg: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM001-1CH1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdg1 1 3907029167 3907029167 1.8T ee GPT
Partition 1 does not start on physical sector boundary.
$ sudo parted /dev/sdg print
Error: /dev/sdg: unrecognised disk label
Model: ATA ST2000DM001-1CH1 (scsi)
Disk /dev/sdg: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:
Versions: fdisk from util-linux 2.33.1, and parted (GNU parted) 3.2
OS: Debian 10 Buster 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
smartctl info (same in both cases):
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1CH164
Serial Number: Z1E6Q80D
LU WWN Device Id: 5 000c50 065bb1ceb
Firmware Version: CC27
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Mar 28 13:00:17 2021 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
What I tried already: moving the partition to the right, with gparted, in order to make sure that it is aligned to MiB. That did not help, and actually parted was already telling me that the partition was optimally aligned (when in docking).
One thing I noticed is that logical sector size differs when in the external docking (4096) and internally (512).
Of course I copied the data elsewhere and I could just reformat it while it's attached to the PC, but I'd like to learn a bit from this and see if there's a way to correct the current GPT. Any ideas?
|
Unfortunately, GPT still depends on the logical sector size, and in your case it differs:
Sector size (logical/physical): 4096 bytes / 4096 bytes
Sector size (logical/physical): 4096B/4096B
vs.
Sector size (logical/physical): 512 bytes / 4096 bytes
Sector size (logical/physical): 512B/4096B
The difference usually happens because some controllers / USB bridges emulate the wrong sector size.
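A quick sanity check, using the sector counts from the two fdisk outputs above, confirms that both views describe the same 2 TB disk — only the logical sector size differs:

```shell
# Same disk reported with two different logical sector sizes:
bytes_as_4k=$((488378646 * 4096))    # via docking station: 4096-byte logical sectors
bytes_as_512=$((3907029168 * 512))   # via internal SATA:   512-byte logical sectors
echo "$bytes_as_4k"                  # both print 2000398934016 (2.00 TB)
echo "$bytes_as_512"
```

Since GPT addresses are stored in logical sectors, every partition boundary (and the backup header location) shifts by a factor of 8 between the two views, which is why the table appears invalid.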
You can work around it by using losetup with --sector-size parameter:
losetup --find --show --partscan --sector-size 4096 /dev/sdg
Then check for /dev/loopXpY devices.
If you get another USB enclosure that does not force 4K logical sector size, you'll have to re-create the partition table for 512 byte sectors. It's not possible to create one partition table that works for both sector sizes - you could do it with LVM but LVM is not a partition table format.
| GPT disk looks different in external docking bay and internal swap bay |
1,501,411,620,000 |
I want to disable swap on several running ubuntu 16.04 servers. I'd like, if possible, not to reboot them. From my research, it seemed that
running swapoff -a to disable swap until the next reboot
and commenting the swap line in /etc/fstab to persist after the next reboot
should do the job. However, it seems that the kernel is re-enabling the swap: a varying amount of time after the swapoff, I see something like that in the /var/log/kern.log log:
Nov 28 12:00:51 srv07 kernel: [ 8049.183480] Adding 62498812k swap on /dev/sda3. Priority:-1 extents:1 across:62498812k FS
Once it happened 4 hours after the swapoff, another time after only 5 minutes.
What's causing this?
This is on Ubuntu 16.04 server, kernel version 4.4.0.
|
The disks were using GPT, and this was due to GPT partition automounting:
On a GPT partitioned disk systemd-gpt-auto-generator(8) will mount partitions following the Discoverable Partitions Specification, thus they can be omitted from fstab.
Another page of the same documentation explains how to disable this:
Start gdisk, e.g.:
$ gdisk /dev/sda
Press p to print the partition table and take note of the number(s)
of the partition(s) for which you want to disable automounting.
Press x for extra functionality (experts only).
Press a to set attributes. Input the partition number and set
attribute 63. Under Set fields are: it should now show 63 (do not
automount). Press Enter to finish changing attributes. Repeat this for all
partitions you want to prevent from automounting.
When done write the table to disk and exit via the w command.
Alternatively using sgdisk, the attribute can be set using the
-A/--attributes= option; see sgdisk(8) for usage. For example, to set partition attribute 63 "do not automount" on /dev/sda2 run:
$ sgdisk -A 2:set:63 /dev/sda
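For reference, GPT attribute 63 is the most significant bit of the partition's 64-bit attribute field; a quick sketch of the corresponding hex mask:

```shell
# GPT attribute bits form a 64-bit field; "do not automount" is bit 63,
# i.e. the hexadecimal mask 8000000000000000:
printf 'bit 63 mask: %016x\n' $((1 << 63))
```

To undo the change later, `sgdisk -A 2:clear:63 /dev/sda` clears the bit again.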
| Can't disable swap on a GPT-based system |
1,501,411,620,000 |
I am in doubt whether I have partitioned my hdd correctly as GPT on a BIOS motherboard. I used gparted to partition and I don't know if I aligned the beginning/end of the disk correctly, used correct flags etc. The disk in question is sdc:
$ sudo lsblk -f
NAME FSTYPE LABEL MOUNTPOINT
sda
├─sda1 ntfs System Reserved
├─sda2 ntfs win7
└─sda3 ntfs WINYANCI
sdb
├─sdb1
└─sdb5 ext4 YAHSI
sdc
├─sdc1
├─sdc2 swap [SWAP]
├─sdc3 ext4 /
├─sdc4 ext4 /home
├─sdc5 ext4 store1
└─sdc6 ntfs store2
sdd
├─sdd1
├─sdd2 ntfs DEPO
└─sdd5 ntfs HUSUSI
sr0
here is what gdisk shows:
$ sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.8
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): p
Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 2758BB06-C7E7-451B-9C92-F1B278721BB6
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3437 sectors (1.7 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 6143 2.0 MiB EF02
2 6144 8394751 4.0 GiB 8200
3 8394752 76754943 32.6 GiB EF00
4 76754944 174409727 46.6 GiB 0700
5 174409728 1346283519 558.8 GiB 0700
6 1346283520 1953523711 289.6 GiB 0700
and parted shows this.
Are there any mistakes?
|
To check whether all partitions are aligned to the 4096-byte physical sectors, divide each start sector by 8 (8 × 512-byte logical sectors = 4096 bytes); a whole-number result means the partition starts on a physical sector boundary. Let's see:
1346283520/8=168285440
174409728/8=21801216
76754944/8=9594368
8394752/8=1049344
6144/8=768
2048/8=256
It looks good. The next thing is to check if the size of the partitions can also be divided by 8:
(1953523711−1346283520+1)/8=75905024
(1346283519−174409728+1)/8=146484224
(174409727−76754944+1)/8=12206848
(76754943−8394752+1)/8=8545024
(8394751−6144+1)/8=1048576
(6143−2048+1)/8=512
It also looks good. You divide by 8 because of the technology called "advanced format": such disks use 4096-byte physical sectors while still presenting 512-byte logical sectors, so a partition that does not start on a multiple of 8 logical sectors straddles physical sectors and hurts performance.
But this concerns only disks that have this feature. If you don't have a disk with the "advanced format" technology, it doesn't matter what alignment you use. Most modern disks do use it, however, and most partitioning tools align partitions to 1 MiB by default.
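Both checks can be scripted. A minimal sketch using the start sectors from the gdisk listing above — divisibility by 8 means 4096-byte alignment, divisibility by 2048 means the stricter 1 MiB alignment:

```shell
# Start sectors from the gdisk output above:
for s in 2048 6144 8394752 76754944 174409728 1346283520; do
  if [ $((s % 2048)) -eq 0 ]; then
    echo "$s: 1MiB-aligned"
  elif [ $((s % 8)) -eq 0 ]; then
    echo "$s: 4KiB-aligned only"
  else
    echo "$s: NOT aligned"
  fi
done
```

All six starts here are multiples of 2048, matching gdisk's "Partitions will be aligned on 2048-sector boundaries" note.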
| Partitioning correctly for GPT in BIOS system |
1,501,411,620,000 |
I'm trying to set up a dual UEFI boot Windows/Arch Linux, and have already installed Windows on a GPT layout through UEFI boot. I now want to fresh install Arch Linux using GPT layout as well, and was wondering how I could do that.
More specifically, do I need to modify the content of the core install image to be able to boot from it in UEFI?
If I manage to boot in UEFI from the core install image, will it automatically set up my partitions using GPT layout?
I've read a couple of tutorials on how to set up UEFI boot with Arch, but it seems most of them only considered a situation where Arch Linux was already installed (using an MBR layout?).
Thank you
|
I was able to boot Arch from UEFI by using an Archboot image, and then install it on the GPT drive. Then I had to install grub2, which I installed on the same partition as the Microsoft EFI partition, and chainloaded Windows 7 bootloader from it. Thanks!
| Installing Arch Linux with UEFI Boot and GPT Layout |
1,501,411,620,000 |
I'm trying to migrate my home server from FreeNAS 8.3 to DragonFly BSD. In order to shuffle my files about I picked up a Seagate 8Tb Archive disk, attached it via eSATA, formatted it as UFS under FreeNAS then patiently waited about a week for it to trickle full.
Now I've got DragonFly going, but try as I might I can't get the UFS volume mounted. Is there some way to get this thing mounted under DragonFly?
I can see that the drive is using GPT (and a protective MBR) and is definitely UFS. Is there something incompatible between the two systems, despite their FreeBSD heritage? It also seems odd that I can see slices but not partitions. I expected ls /dev/ad6* to give me something like /dev/ad6p1a since the drive is using GPT, but evidently not.
I'm yet to try anything invasive (as in, write to the disk) because I'm completely in the dark on what the cause is.
% uname -a
DragonFly loki.misque.me 4.4-RELEASE DragonFly v4.4.3-RELEASE #5: Mon Apr 18 22:47:32 EDT 2016 [email protected]:/usr/obj/home/justin/release/4_4/sys/X86_64_GENERIC x86_64
Some basic information about the disk:
% ls /dev/ad6*
/dev/ad6 /dev/ad6s0 /dev/ad6s1
% cat /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/serno/4C530012740115112064.s1a / ufs rw 1 1
/dev/serno/4C530012740115112064.s1d /home ufs rw 2 2
/dev/serno/4C530012740115112064.s1e /tmp ufs rw 2 2
/dev/serno/4C530012740115112064.s1f /usr ufs rw 2 2
/dev/serno/4C530012740115112064.s1g /var ufs rw 2 2
/dev/serno/4C530012740115112064.s1b none swap sw 0 0
proc /proc procfs rw 0 0
/dev/ad6s1 /mnt/backup ufs ro 0 0
The mount effort in question:
% sudo mount -v /mnt/backup
mount_ufs: /dev/ad6s1 on /mnt/backup: incorrect super block
And my diagnostic efforts:
% sudo fdisk /dev/ad6
******* Working on device /dev/ad6 *******
parameters extracted from device are:
cylinders=15504021 heads=16 sectors/track=63 (1008 blks/cyl)
Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=15504021 heads=16 sectors/track=63 (1008 blks/cyl)
Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 238,(EFI GPT)
start 1, size 4294967295 (2097151 Meg), flag 80 (active)
beg: cyl 0/ head 0/ sector 2;
end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
% sudo disklabel64 -r ad6
disklabel64: bad pack magic number
% sudo disklabel64 -r ad6s0
disklabel64: bad pack magic number
% sudo disklabel64 -r ad6s1
disklabel64: bad pack magic number
% sudo camcontrol devlist
<ATA WDC WD20EARX-00P AB51> at scbus3 target 1 lun 0 (da0,sg0,pass0)
<ATA WDC WD30EFRX-68E 0A80> at scbus3 target 2 lun 0 (da1,sg1,pass1)
<ATA OCZ-AGILITY 1.4> at scbus3 target 3 lun 0 (da2,sg2,pass2)
<ATA WDC WD30EFRX-68A 0A80> at scbus3 target 4 lun 0 (da3,sg3,pass3)
<ATA WDC WD20EARS-00M AB51> at scbus3 target 5 lun 0 (da4,sg4,pass4)
<ATA WDC WD20EFRX-68E 0A82> at scbus3 target 6 lun 0 (da5,sg5,pass5)
<ATA WDC WD20EARS-00M AB51> at scbus3 target 7 lun 0 (da6,sg6,pass6)
<SanDisk Cruzer Fit 1.27> at scbus6 target 0 lun 0 (pass8,sg8,da8)
% sudo gpt show /dev/ad6
start size index contents
0 1 - PMBR
1 1 - Pri GPT header
2 32 - Pri GPT table
34 94 -
128 4194304 0 GPT part - FreeBSD Swap
4194432 15623858696 1 GPT part - FreeBSD UFS/UFS2
15628053128 7 -
15628053135 32 - Sec GPT table
15628053167 1 - Sec GPT header
% sudo file -s /dev/ad6
/dev/ad6: DOS/MBR boot sector; partition 1 : ID=0xee, active, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors
% sudo file -s /dev/ad6s1
/dev/ad6s1: Unix Fast File system [v2] (little-endian) last written at Thu Jan 1 00:00:00 1970, number of blocks 0, number of data blocks 0, pending blocks to free 0, system-wide uuid 0,
|
You chose a rather convoluted migration.
FreeBSD, and therefore FreeNAS, uses UFS2 while DragonFly uses the older UFS1. Both have softupdates but UFS2 has a different format as it supports some other features like more timestamps, extended attributes, faster fsck and SUJ.
| Migrate UFS drive from FreeNAS to DragonFly BSD |
1,501,411,620,000 |
I've tried installing OpenSUSE 13.2, Debian 8/8.1 and Ubuntu 15.04 (all them amd64). Debian/Ubuntu won't show disks and OpenSUSE can't format the partitions created on them.
During the install, OpenSUSE detects disks and even allow me to delete old partitions, create a new partition table,and to create new partitions. But won't format the new partitions raising the error "can't mount /dev/sda1: device or resource is busy". When debian install didn't detect the HDDs, I tried to mount them by myself and received the same error.
During Debian / Ubuntu Install, I choose "manual" partitioning option, but it won't show my hard drives.
Everywhere else in the system where I check for the HDDs, they are detected correctly: fdisk -l, cfdisk -l, lsblk, /sys/block/, parted, dmesg. All commands shows both my /dev/sda and /dev/sdb and their partitions correctly.
The hardware is a hybrid UEFI capable Ultrabook (Dell Inspiron 14z 5423) which has both a SSD and a SATA HDD.
Things I've tried so far:
- Used fixparts (windows) trying to find GPT stray. Nothing found.
- Used AOMEI Partition Assistent Pro (windows) to fix MBR, but no luck detecting partitions during debian install. So I went back to AOMEI and also resized both disks partition and added a EXT3 partition to each disk.
- Created a new partition table in UEFI mode (which converted the disk from mbr to gpt)
- Changed BIOS settings to all possible combinations: SATA Type both ATA and AHCI, Boot Type both UEFI and Legacy, UEFI with both Secure Boot On and Off, UEFI looking for Option ROM On and Off...
Any suggestions will be much appreciated!
|
I finally figured it out myself.
Solution:
First I booted OpenSUSE from USBKEY in UEFI mode.
In the installer partitioner, I removed all partitions on the SSD and HDD.
Then I created a new partition table for each disk, still using the partitioner.
Booted up from Ubuntu 15.04 USBKEY installer and it finally could manage partitions and install the system properly.
Although it sounds simple, it took some time and required two operating systems to solve the issue.
Since I booted in UEFI mode, when I created a new partition table, I believe it converted the disks to GPT format, and Ubuntu could finally "detect" and manage the disks partitions.
It remains a mystery why it didn't work in MBR/Legacy mode at all, even after creating a new partition table, and why OpenSUSE couldn't format/mount the partitions it created.
Finally I got linux on it.
| Can't format HDDs and install linux to Dell hybrid ultrabook |
1,501,411,620,000 |
Why do I get the following error when executing the fdisk -l command in Linux?
# fdisk -l /dev/sdb
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
|
The real limitation is that the fdisk tool in the util-linux package doesn't support GPT-type partition tables, which you can find on any disk. However, they're commonly found on disks larger than 2 TiB, because the old MBR-type partition tables don't support sizes that large.
The easiest fix is, as the error suggested, to just use the GNU Parted software instead. If you'd like to still have the old fdisk style interface, the gnu-fdisk package provides GNU Fdisk, an alternate version of fdisk that does support GPT.
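The 2 TiB MBR limit follows directly from its on-disk format: MBR stores sector addresses as 32-bit values, so with the usual 512-byte sectors the addressable space tops out at 2 TiB:

```shell
# MBR stores LBAs as 32-bit values; with 512-byte sectors:
max_bytes=$(( (1 << 32) * 512 ))
echo "$max_bytes bytes"              # 2199023255552
echo "$(( max_bytes / (1 << 40) )) TiB"
```

GPT uses 64-bit sector addresses, which is why it has no such practical limit.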
| Gnu Parted Error in HardDisk of Linux? |
1,501,411,620,000 |
In the output of parted, Windows 8/8.1 seems to use flags in the GUID partition entries.
I guess that for example partition with hidden flag is not shown in Windows Explorer. However, does Linux also use GPT flags?
|
Does it use GPT flags? Define use :)
The only "important" flag to Linux is the boot flag, and that's not directly handled by Linux per se: your system's firmware at boot time will select the filesystem with this flag and search therein for an EFI bootloader, boot that, and then the bootloader will use the filesystem in which it lives to configure itself and do all that it needs.
Linux probably wouldn't modify filesystem flags without your knowledge or consent, and I can't think of a way that it would rely on them as they're not too useful to a booted OS. If you were to also install a BIOS-based bootloader on a GPT volume, there's a flag for that and Linux might use that to determine where it should, say, update that BIOS bootloader.
Typically, you're on your own when it comes to filesystem layout and Linux won't make decisions for you. If you want your EFI volume mounted, you need to shove a line into /etc/fstab. If you have a separate /boot partition, you're going to need to shove that into /etc/fstab too. The only thing I can think of which might look at flags would be bootloader code/installation code, so possibly GRUB or rEFInd. (rEFInd probably just uses EFI variables, though, which are something else)
To answer your question about Windows, it maintains a "secret" partition and uses these flags to hide the partition from the operating system. It's still there and you can access it if you want, but it's made to be difficult. Linux couldn't care less if you call it secret or not, and if you plug in a hard drive with one of these partitions installed, it will show up and be displayed and may be automounted if your machine is configured for that.
| Does Linux use GPT flags? |
1,501,411,620,000 |
What is the advantage of the GPT partitioning scheme over MBR, when
- the boot drive is below 2TB
- it's a BIOS system, or UEFI boot is disabled?
GRUB on a BIOS system with a GPT-partitioned disk needs an extra 1MB partition, which I think is somewhat messy.
|
You have four primary partitions and want to add a fifth... and you can't just redeclare them extended/logical because those need an extra sector for each partition.
Also GPT has a backup at the end of the disk so if you ever lost a partition table to MSDOS and had to resort to TestDisk, with GPT you might be able to do without.
grub on BIOS system with GPT partitioned needs an extra 1MB partition
It's closer to 64KB actually, at least it was around that when I last checked what was actually written to that partition. And with msdos partitions as well, grub has to put its core somewhere. The only difference is that with GPT the grub developers thought it'd be nice to make it official-like by having a dedicated partition type for it.
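For scale: with the usual first-partition start at sector 2048, the post-MBR gap where GRUB embeds its core image on an MBR disk is just under 1 MiB — far more than the roughly 64 KB the core image needs, which is what the dedicated BIOS boot partition on GPT replaces:

```shell
# Gap between the MBR (sector 0) and a first partition at sector 2048:
gap_sectors=$((2048 - 1))
echo "$((gap_sectors * 512)) bytes available for GRUB's core image"
```

On GPT that gap is occupied by the partition table itself, hence the dedicated BIOS boot partition.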
| Advantage of GPT over MBR partition table [closed] |
1,501,411,620,000 |
How to boot GPT based system to Linux and Windows? This is not a question of starting from a fresh GPT based system, but starting from a MBR converted to GPT based system.
My Asus laptop initial setup,
I disabled the Secure Boot Control, and
I enabled [Launch CSM] (Compatibility Support Module)
I partitioned my HD using MBR
All my systems on my Asus laptop were boot from such BIOS/MBR/CSM mode, including Win8 and all my Linux
However, I found that my USB is booted only in EFI style, and Windows 10 is refusing to be installed to my BIOS/MBR/CSM mode system when my USB is booted in EFI style.
So I converted my MBR disk to GPT, and of course, as Krunal warned, doing so broke my system boot, and I need to make everything bootable again.
Alright, so now is my question.
In BIOS/MBR/CSM mode, I have an active MBR partition, all my systems were boot from it (using extlinux), including Win8 and all my Linux.
In GPT mode, however, this is where the problem begins for me.
To mark a GPT partition bootable under Linux, I saw that I needed to set the "boot flag" in GParted, which I did (marking my previous active MBR partition with type code EF00, which stands for "EFI System"), and GParted consequently set the ESP (EFI System Partition) flag as well.
However, according to the GUID Partition Table (GPT) specific instructions from wiki.archlinux, I need a partition of type code EF02 with the bios_grub flag in order to boot. But I don't have any spare room for such a partition.
So I basically don't know which way to go, and don't want to further mess up with my already-messed-up and unbootable system.
My current partition scheme (I didn't think it was relevant, but somebody asked for it):
Disk /dev/sda: 698.65 GiB, 750156374016 bytes, 1465149168 sectors
Disk model: HGST HTS541075A9
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: AA9AB709-8A5D-468D-990E-155BA6A2FBB3
Device Start End Sectors Size Type
/dev/sda1 2048 129026047 129024000 61.5G Microsoft basic data
/dev/sda2 129026048 169986047 40960000 19.5G EFI System
/dev/sda3 169988096 186372095 16384000 7.8G Linux filesystem
/dev/sda4 186374144 200710143 14336000 6.9G Linux filesystem
/dev/sda5 200712192 215046143 14333952 6.9G Linux filesystem
/dev/sda6 215048192 231432191 16384000 7.8G Linux filesystem
/dev/sda7 231434240 247818239 16384000 7.8G Linux filesystem
/dev/sda8 247820288 264204287 16384000 7.8G Linux filesystem
/dev/sda9 264206336 276494335 12288000 5.9G Linux filesystem
/dev/sda10 276496384 288784383 12288000 5.9G Linux filesystem
/dev/sda11 288786432 329746431 40960000 19.5G Linux filesystem
/dev/sda12 329748480 452628479 122880000 58.6G Microsoft basic data
/dev/sda13 452630528 493590527 40960000 19.5G Linux swap
/dev/sda14 493592576 903192575 409600000 195.3G Linux filesystem
/dev/sda15 903194624 1465147391 561952768 268G Linux filesystem
So clear instructions on how to boot Linux (and Windows) from such environment is really appreciated. Thanks.
UPDATE/Conclusion:
/dev/sda2 was my active MBR partition before, and as explained above, I changed its type from Linux filesystem to EFI System. But it is in ext4 format so it cannot be used as an EFI System.
So, all I need to fix my above problems are:
revert sda2's type back to Linux filesystem,
split out a FAT32 partition as the ESP partition from my sda13 19.5G Linux swap,
change the firmware from CSM to native EFI mode
then follow advice below from @telcoM
|
As far as I know, Windows does not handle such a converted system disk as a special case once the conversion is done, so it should be treated exactly the same as a disk in a "fresh" GPT-based system.
In particular, Windows imposes the limitation that GPT-partitioned system disks must always boot Windows in UEFI native style: that is, using BIOS-style boot process on GPT-partitioned disks is not allowed.
First, a primer on differences between MBR and GPT partitioning:
In GPT, there is no division to primary/extended/logical partitions like MBR partitioning has. All GPT partitions are just partitions.
Although there is a "legacy BIOS bootable" partition attribute bit in GPT partition table, it is not used at all when booting in UEFI style.
MBR-partitioned disk normally has a gap of unused disk blocks between the block #0 (the actual Master Boot Record) and the beginning of the first partition. On modern systems, the first partition is usually aligned exactly 1 MiB from the beginning of the disk, so if the common 512-byte blocks are used, the first partition will begin at block #2048. The gap between MBR and the first partition is used by bootloaders like GRUB. On GPT-partitioned disks, this area is occupied by the actual GPT partition table and cannot be used.
On MBR-partitioned disks, partition type is identified by a single byte. On GPT-partitioned disks, the type of each partition is identified by an UUID.
MBR-partitioned disks have a 32-bit disk signature; GPT-partitioned disks have a 128-bit UUID for the same purpose. Each GPT partition also has an unique UUID in the partition table: it can be used to uniquely identify a partition even if the filesystem used in it is unknown. Linux displays this as a PARTUUID; for MBR-partitioned disks, a combination of the MBR disk signature and partition number is used in lieu of a real partition UUID.
The MBR partition table exists in block #0; if extended partitions are used, the beginning of each logical partition has an add-on partition table. The GPT partition table starts in block #1 and occupies multiple blocks; there is also a backup GPT partition table at the very end of the disk. This often causes a surprise if you are used to wiping the partitioning off a disk by just zeroing a number of blocks at the beginning of a disk only.
Since partition type UUIDs are inconvenient for humans to use, different partitioning programs have used various methods to shorten them. gdisk will use four-digit type codes; Gparted represents the different partition type UUIDs by various flags (which is, in my opinion, an unfortunate choice).
A native UEFI-style boot process is also very different from classic BIOS-style boot process:
A BIOS-style boot process begins by (usually) assigning the BIOS device ID 0x80 (=first BIOS hard disk) to the device that is currently selected by the BIOS settings as the boot drive. When booting in UEFI style, the firmware settings ("BIOS settings" in an UEFI system) define a boot path: it can take many forms, but the most common one for installed operating systems will specify a partition UUID and a boot file pathname.
When booting BIOS-style, the firmware checks for a 2-byte boot signature at the end of block #0 of the selected boot disk, and then just executes the about 440 bytes of machine code that fits in the MBR block in addition of the actual partition table. When booting UEFI-style, the firmware has a built-in capability to understand some types of filesystems: the UEFI specification says a compliant UEFI firmware must understand FAT32, but it may understand other filesystem types too. An UEFI "bootable disk" must contain a partition with a special type UUID: this is called the EFI System Partition, or ESP for short. The firmware will look for an ESP partition whose unique UUID matches the one specified by the boot path, and then attempts to load the specified boot file from that partition.
When booting UEFI-style from a removable media, or from a disk that has not previously been configured to the firmware settings, the firmware looks for an ESP partition that contains a filesystem the firmware can read, and a file with a particular pathname. For 64-bit x86 hardware, this UEFI fallback/removable media boot path will be \EFI\boot\bootx64.efi when expressed in Windows-style, or <ESP mount point>/EFI/boot/bootx64.efi in Linux-style.
The ESP partition has a standard structure: each installed OS must set up a sub-directory \EFI\<vendor or distribution name>\ and only place their bootloader files within it. The \EFI\boot\ sub-directory is reserved for the fallback/removable-media bootloaders, which follow the Highlander rule: there can be only one (for each system architecture).
By setting the GParted "boot flag" on a non-ESP partition, you effectively changed the type UUID of that partition to ESP type UUID. That was a mistake: now the disk has two partitions with type ESP. You should change the type of the partition you changed back to what it originally was. In GParted, that would mean removing the "boot" and "esp" flags; in gdisk, it would probably mean setting the type code to 8300 ("Linux filesystem") or perhaps 8304 ("Linux x86-64 root").
Since you also have Windows on the same disk, trying to use a BIOS-Boot partition (gdisk type code ef02) is not recommended: that would usually force you to go to firmware settings and enable/disable CSM each time you wanted to switch between operating systems. Instead, you would want to use the live Linux boot media to mount your on-disk installation to e.g. /mnt, and then chroot to it to replace the current BIOS-style bootloader (usually GRUB with the i386-pc architecture type) with a native UEFI one (e.g. GRUB with x86_64-efi architecture type). Basically (all the following commands as root):
mount <your root filesystem device> /mnt
mount -o rbind /dev /mnt/dev
mount -t proc none /mnt/proc
mount -t sysfs none /mnt/sys
chroot /mnt /bin/bash
Now your session will be using the environment of your installed Linux OS, and you should be able to use package manager and any other tools pretty much as usual (caveat: if you have parts of the standard system like /var as separate partitions, mount them now too!)
The first step should be adding a mount point for the ESP and mounting it. First run lsblk -o +UUID to find the UUID of your ESP partition; since its filesystem type is most likely FAT32, it should be of the form xxxx-yyyy. Replace <ESP UUID> in the following commands with the actual UUID:
mount UUID=<ESP UUID> /boot/efi
echo "UUID=<ESP UUID> /boot/efi vfat umask=0077,shortname=winnt,flush 0 2" >>/etc/fstab
The next step is switching the bootloader type.
Unfortunately you didn't mention which Linux distribution you're using. If it's Debian, or Ubuntu, or some distribution derived from those, it would be a matter of using the standard package management tools to remove the grub-pc and grub-pc-bin packages and install grub-efi-amd64 and grub-efi-amd64-bin in their stead, then running grub-install /dev/sda (or whichever disk contains your ESP partition), and finally running update-grub to rebuild the GRUB configuration.
At this point, you can exit the chroot, undo the mounts and see if your system can boot now.
(if you had to mount any extra partitions, unmount them now)
exit
umount /mnt/dev
umount /mnt/proc
umount /mnt/sys
umount /mnt
reboot
You might also want to install the efibootmgr utility, as it allows you to view, backup and modify the firmware boot settings while Linux is running. (Windows can do the same with its bcdedit command, but in my opinion that command is much more awkward to use than efibootmgr.)
| MBR converted to GPT based system, how to boot Linux and Windows |
1,501,411,620,000 |
In Linux Mint 18.3 which boots from an HDD, I want to mount an external SSD.
When I run the command sudo fdisk -l, I get all the drives and partitions, including the SSD, and when I run sudo blkid, I get the type and UUID of each of them. I know that gpt and mbr are partitioning schemes for storage drives, and that ext4 (or other ext variants) are Linux file system types. However, the type for the SSD is reported as gpt by the aforementioned commands.
I tried to mount the SSD by editing the fstab file in /etc/fstab and adding this line ( I set the mount point as /media/ssd-mountpoint ) :
uuid=<the uuid I got from blkid command> /media/ssd-mountpoint gpt defaults 0 2
After running mount -a, however, I get the error "the gpt type is unknown".
How can solve this to mount the ssd with gpt type?
Should I convert this format?
|
For the solution to this problem, there is some information in various tutorials. The following steps are prerequisites to making the new SSD usable:
1. Partition
2. Create a File System and Format
3. Mount
The number of partitions into which the SSD is divided is up to you; here the intent is a single big partition. The file system type chosen is ext4; any ext variant will do if you are going to use this partition only in Linux. A reference to the complete solution (graphical or through the command line), with the commands for each step, is given here:
Installing a New Storage Drive
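The direct cause of the "unknown type gpt" error is that the third field of an /etc/fstab entry must be a file system type (what blkid reports as TYPE), not the partition-table format. Assuming the partition ends up formatted as ext4 and mounted at /media/ssd-mountpoint, the corrected entry would look something like this (the UUID is a placeholder for the real one from blkid):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/ssd-mountpoint  ext4  defaults  0  2
```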
| How to mount a device of gpt type? |
1,501,411,620,000 |
I just bought two new 4TB external USB disks for backups
http://www.bestbuy.com/site/wd-my-passport-4tb-external-usb-3-0-portable-hard-drive-black/5605533.p
that came preformatted with a single large MS partition. I'm running Slackware 14.2 x64, and ran gdisk to d(elete) that partition and make three n(ew) 1.2TB partitions (just dividing the total sectors by three). Then I w(rote) the partition table info and gdisk exited. And then both fdisk -l and gdisk -l /dev/sdb showed everything looking exactly like I'd expected it should.
But then mkfs -t ext4 /dev/sdb1 said it saw the original ms partition, and asked whether or not to proceed. I said no, and tried gdisk several more times, d(eleting) and re-n(ewing) all three partitions. Also tried sync, and tried unplugging the drive and re-plugging it. Nothing worked. I finally tried letting mkfs start to format the ms partition it reported, and killed it after a minute. Then re-ran gdisk yet again. And now, finally, mkfs saw the new partition table. And everything proceeded smoothly.
But what was I doing wrong? That is, how do you run gdisk so that the subsequent mkfs correctly and immediately sees the partition table you just w(rote) using gdisk? I wouldn't think that what I ended up doing is the recommended procedure.
|
The kernel is still using the old partition table.
Issue partprobe to make the kernel use the new partition table, or reboot.
See man partprobe for the gory details.
EDIT (thanks to comments):
gdisk prints the following Warning message informing you that the kernel is still using the old partition table, inviting you to restart.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
fdisk and parted (including gparted) do the partprobe automatically and inform you whether it succeeded or not.
| linux gdisk (on 4TB USB drive) followed my mkfs -- but mkfs doesn't see new partitions |
1,501,411,620,000 |
parted utility somehow detects the file system on partitions on my GPT disk:
I guess it does not do this based on partition type codes (seen in the gdisk output), because those would be 27 (Hidden NTFS Win) for partitions 1, 5, 6 and 7, and for example ef (EFI) for /dev/sda2, but in the parted output there are clearly different file systems listed.
|
It looks at the data on the partition, similar to what file -s /dev/partition does. If you strace it you should see things like this:
lseek(3, 1048576, SEEK_SET) = 1048576
read(3, "\353<\220mkfs.fat\0\2\10..., 512) = 512
A seek to position 1048576 (1 MiB or 2048 sectors) is outside the partition table (it's the start of the first partition), and it reads from there, so it's looking at other things than just the partition table itself.
It also looks at /proc/mounts, so it could collect information from there as well. What I don't see it doing is what blkid does.
The filesystem information shown by parted is not terribly accurate; the above example shows up as a FAT filesystem, but it's actually being used as MD-RAID / LUKS / LVM. The MD-RAID metadata sits 4k from the start, so an old FAT header in the first 4k might survive and confuse heuristics like those of file or parted...
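As a rough illustration of that kind of sniffing (a simplified sketch, not parted's actual code), the following matches a few well-known magic values: the mkfs.fat OEM string at byte 3, the NTFS ID at byte 3, and the ext2/3/4 superblock magic 0xEF53 at byte 1024+56:

```python
# Simplified file-system sniffing in the spirit of `file -s`:
# read the start of a partition and compare well-known magic bytes.
# Real detectors check many more signatures and sanity fields.

def sniff_filesystem(data: bytes) -> str:
    oem = data[3:11]
    if oem.startswith(b"NTFS"):                  # "NTFS    " OEM ID
        return "ntfs"
    if oem.startswith((b"mkfs.fat", b"MSDOS")):  # FAT formatted by mkfs.fat / Windows
        return "fat"
    if len(data) >= 1082 and data[1080:1082] == b"\x53\xef":
        return "ext2/3/4"                        # superblock magic 0xEF53
    return "unknown"

# Synthetic headers instead of reading a device:
fat_header = b"\xeb<\x90mkfs.fat" + bytes(1100)
ext_header = bytes(1080) + b"\x53\xef" + bytes(20)
print(sniff_filesystem(fat_header))  # fat
print(sniff_filesystem(ext_header))  # ext2/3/4
```

The strace excerpt above shows exactly this pattern: a seek to the partition start, then a 512-byte read whose first bytes are a FAT boot sector.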
| How does "parted" know the file-system type for GPT partitions? |
1,501,411,620,000 |
I have installed a new OS in the free space of the same hard disk after failing to upgrade the old one. Now there are two boot partitions and two EFI System partitions. After a harrowing experience deleting the partition containing the old OS, where the old swap was located, I am not confident about removing the rest of the partitions related to it.
Despite reading Wikipedia articles on UEFI and GPT, I don't think I have gained a sufficient overview of how the booting process works, especially how the UEFI boot loader manages to find the correct EFI System partition (I have two) to boot the new OS. I hope someone can enlighten me on this issue so that I can gain some confidence and decide whether it is safe to delete the old boot (sda1) and EFI System (sda2) partitions.
|
I don't have direct answer, but what I would try if the partition needs to be reclaimed is:
first make sure you can boot the system from a CD
backup the EFI System partitions (to another partition), using dd
reformat one of them and reboot (without CD)
If the system does not come up, you can reboot from CD and restore the partition and try the other one.
Keep a written note of what you stored where; you will not have command history
| Is it safe to delete old boot and EFI System partitions? |
1,501,411,620,000 |
I am reading through the UEFI standard. On page 115, section 5 it discusses the GPT disk layout. I'm a bit confused as to exactly how this works. From the below, it sounds like UEFI will ignore the MBR.
A legacy MBR may be located at LBA 0 (i.e., the first logical block)
of the disk if it is not using the GPT disk layout (i.e., if it is
using the MBR disk layout). The boot code on the MBR is not executed
by UEFI firmware.
So is this basically saying if you put the firmware in legacy boot mode, this is how to define an MBR which will play nicely with that legacy boot mode? Am I correct in saying that if the system's firmware were in UEFI mode, then a system with an MBR defined as specified in chapter 5 would not be bootable?
|
So is this basically saying if you put the firmware in legacy boot mode, this is how to define an MBR which will play nicely with that legacy boot mode?
Yes, it's possible to have a disk that's bootable in both BIOS and UEFI mode. Many tools for creating a bootable USB stick can do that
Am I correct in saying that if the system's firmware were in UEFI mode then a system with an MBR defined as specified in chapter 5 would not be bootable?
No, that part of the spec only says The boot code on the MBR is not executed by UEFI firmware, which means that the 446-byte region in the MBR containing the binary boot instructions won't be run in UEFI mode
It's still possible to boot from an MBR disk in UEFI mode if you create a proper ESP (EFI System Partition) on it. UEFI systems only boot executable images in the ESP
So by putting a proper BIOS boot loader in the MBR and a UEFI boot loader in the ESP you can have a disk that boots in either mode
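To make the "protective MBR" part of the spec concrete: a GPT disk still carries an MBR in sector 0, but with a single partition entry of type 0xEE covering the whole disk, so MBR-only tools see it as occupied and leave the GPT alone. A small sketch (synthetic bytes, not a real device) of how a tool might recognize one, using the real MBR layout (entries at byte 446, 16 bytes each, 0x55AA signature at byte 510):

```python
# Recognize a GPT "protective MBR": signature present, and one of the
# four 16-byte partition entries carries type 0xEE (GPT protective).

def is_protective_mbr(sector0: bytes) -> bool:
    if sector0[510:512] != b"\x55\xaa":              # MBR boot signature
        return False
    types = [sector0[446 + 16 * i + 4] for i in range(4)]
    return 0xEE in types

# Build a synthetic protective MBR rather than reading a disk:
mbr = bytearray(512)
mbr[446 + 4] = 0xEE        # first entry: type 0xEE
mbr[510:512] = b"\x55\xaa"
print(is_protective_mbr(bytes(mbr)))  # True
```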
| Can you use MBR with UEFI - a question about the UEFI specification |
1,501,411,620,000 |
I have an ext4 partition, backed up with dd from an MBR hard drive, that I would like to restore to a new GPT hard drive. Can I just create an empty partition of the exact same size on that new GPT drive and overwrite it with the one I want to restore, or do I have to do something else because the partition was backed up on an MBR drive?
Thanks.
Edit:
The partition on the MBR hard drive was primary. It was the third partition on the drive. I made the backup with dd.
|
Can I just create an empty partition of the exact same size on that
new GPT drive and overwrite that partition with the one I want to
restore?
Yes.
The MBR/GPT distinction is metadata stored outside of the partition. So when you made a partition backup using dd you only backed up the content of that partition (the filesystem), which does not include anything about the partitioning scheme.
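A toy model of this (plain Python, no devices): the image produced by dd of a partition contains only the filesystem bytes, with no trace of the table it came from, so it drops into any same-sized slot regardless of table format:

```python
# The "disks" here are just dicts: table metadata plus partition contents.
# dd of /dev/sdX3 copies only the contents, never the table.

mbr_disk = {"table": "MBR", "part3": b"ext4 filesystem image"}
backup = mbr_disk["part3"]                      # dd if=/dev/sdX3 of=backup.img

gpt_disk = {"table": "GPT", "part1": bytes(len(backup))}
gpt_disk["part1"] = backup                      # dd if=backup.img of=/dev/sdY1

assert gpt_disk["part1"] == mbr_disk["part3"]   # contents survived the move
print("restored intact across MBR -> GPT")
```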
| Using dd to restore a partition backup from MBR disk to GPT one |
1,580,829,756,000 |
I will be installing Windows 10 and Linux (dual-boot) on a new computer in a couple of days. I would like to use GPT instead of MBR for the partition table.
As I understand it (and have done in the past), it is much easier to install Windows first (and let it try to dominate the machine 😊) followed by the Linux install with grub allowing the dual boot.
Can I use the live USB stick with Linux to run gparted and create the GPT partition table, then boot Windows from its USB installer? Will Windows "respect" the partition table that I have created?
|
The latest Windows versions install on GPT automatically (when the installer is booted in UEFI mode); then proceed with the Linux install as usual, modifying the partitioning setup as required. Why would you partition first?
| Using gparted before installing Windows 10 |
1,580,829,756,000 |
I have just opened an external USB 3.0 hard disk enclosure and installed the disk internally in a PC via SATA. Now the Linux system no longer finds the GPT, which was certainly there. Since there are already 2 TB of data on the disk, it would be nice to recover the partition table.
Can the location of the GPT change when using a different interface (USB, SATA)? How can it be fixed?
Here is the gdisk output mounted in the PC (SATA):
# gdisk /dev/disk/by-id/ata-TOSHIBA_DT01ABA300_123456890
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries.
Command (? for help): q
The disk is a TOSHIBA DT01ABA300 (as you can see above) and was in a Toshiba Canvio USB3 enclosure.
Here is the relevant portion of dmesg:
[ 1.618441] scsi host9: ahci
[ 1.618485] ata9: SATA max UDMA/133 abar m512@0xfd1ff000 port 0xfd1ff100 irq 42
[ 2.106001] ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 2.107329] ata9.00: ATA-8: TOSHIBA DT01ACA300, MX6OABB0, max UDMA/133
[ 2.107332] ata9.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[ 2.108712] ata9.00: configured for UDMA/133
[ 2.609553] scsi 9:0:0:0: Direct-Access ATA TOSHIBA DT01ABA3 ABB0 PQ: 0 ANSI: 5
[ 2.609699] sd 9:0:0:0: [sdg] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
[ 2.609703] sd 9:0:0:0: [sdg] 4096-byte physical blocks
[ 2.609785] sd 9:0:0:0: [sdg] Write Protect is off
[ 2.609788] sd 9:0:0:0: [sdg] Mode Sense: 00 3a 00 00
[ 2.609825] sd 9:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 2.637653] sd 9:0:0:0: [sdg] Attached SCSI disk
Here is where I found the EFI (GPT?!) signature on the raw disk:
dd if=/dev/sdg bs=2M count=32 | hexdump -C | grep -w EFI
32+0 records in
32+0 records out
67108864 bytes (67 MB) copied, 0.447864 s, 150 MB/s
00001000 45 46 49 20 50 41 52 54 00 00 01 00 5c 00 00 00 |EFI PART....\...|
I did not find a gdisk parameter to read the GPT from a particular offset. How can I read it?
# parted /dev/sdg
GNU Parted 3.2
Using /dev/sdg
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit b
(parted) print
Error: /dev/sdg: unrecognised disk label
Model: ATA TOSHIBA DT01ABA3 (scsi)
Disk /dev/sdg: 3000592982016B
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:
(parted) q
And:
# losetup --find --show /dev/sdg
/dev/loop0
# parted /dev/loop0
GNU Parted 3.2
Using /dev/loop0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit b
(parted) print
Error: /dev/loop0: unrecognised disk label
Model: Loopback device (loopback)
Disk /dev/loop0: 3000592982016B
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
(parted)
|
Can the location of the GPT change when using a different interface (USB, SATA)?
Yes, because GPT is stupid and depends on sector size, and some USB enclosures claim 512b sectors when it's really 4096b sectors or vice versa.
Yes, because Linux is stupid and does not support GPT for differing block sizes even though it should be possible to detect this automatically.
You might have to re-create the partition table to convert from GPT-512 to GPT-4096 and hope the partitions were MiB-aligned to allow such conversions in the first place.
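One way to see why byte 0x1000 in the hexdump is significant: the GPT header lives at LBA 1, so its byte offset equals the logical sector size. Finding "EFI PART" at byte 4096 suggests the enclosure presented 4096-byte sectors, while over SATA the drive reports 512-byte logical blocks (see the dmesg output), so the kernel looks at byte 512 and sees nothing. A sketch on a synthetic image:

```python
# Infer which sector size a GPT header was written for by probing
# the byte offset of LBA 1 under each candidate sector size.

GPT_SIGNATURE = b"EFI PART"

def implied_sector_size(disk):
    """Return the sector size whose LBA 1 holds a GPT header, or None."""
    for sector_size in (512, 4096):
        if disk[sector_size:sector_size + 8] == GPT_SIGNATURE:
            return sector_size
    return None

# Synthetic image with a GPT written for 4 KiB sectors:
disk = bytearray(8192)
disk[4096:4104] = GPT_SIGNATURE
print(implied_sector_size(bytes(disk)))  # 4096
```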
| Does the (GPT) partition table location change when moving from USB3 to SATA? |
1,580,829,756,000 |
I've got an external disk with 6 partitions: 4 for Linux, one storage partition in HFS+, and one storage partition in ext4. I'd like to delete the ext4 one and move its resulting unallocated space into my HFS+ one. In GParted I delete the ext4 partition and it becomes unallocated, but when I try to resize my HFS+ partition, I can't enter a new value for "New size (MiB)", and the up arrow for its size is disabled. How can I do this?
My partitions:
/dev/sdb1 ext4 /boot 476.84 MiB
/dev/sdb2 linux-swap 1.86 GiB
/dev/sdb3 ext4 / 9.31 GiB
/dev/sdb4 ext4 /home 46.57 GiB
/dev/sdb5 hfs+ SodiumOxide 523.32 GiB (89.81 used, 433.51 unused)
/dev/sdb6 ext4 WiiMC 14.65 GiB (5.36 used, 9.29 unused)
I'd like to delete sdb6 and add its resulting unallocated space to sdb5 (SodiumOxide)
|
You can't enlarge it with GParted because it currently does not support HFS+ partition "grow". It only supports HFS+ "shrink". See
Gparted features
or, on your machine:
GParted >> View >> File System Support
| Can't increase partition size with GParted? |
1,580,829,756,000 |
In answer to installing grub2 on UEFI GPT:
In brief, on an EFI-based system, you do not install anything in the
MBR; instead, you install a Linux EFI boot loader or boot manager in
the EFI System Partition (ESP) and set it as the EFI's default boot
program using a tool such as efibootmgr (in Linux), bcfg (in an EFI
shell), bcdedit (in Windows), or the EFI's own user interface.
How can I do this step in the most risk free way possible? I would want to install a Linux EFI boot loader or boot manager in the ESP and set it as the EFI's default boot program using efibootmgr.
What needs to be backed up for a Windows 10 system prior to making this change? The UEFI boot entries?
GRUB 2 would be the typical choice for a boot loader?
|
GRUB is quite common, yes; grub-install (no arguments required) will call efibootmgr for you, but feel free to experiment with the latter by reading out the NVRAM using e.g. ALT Rescue. Rod's book on EFI is a well-formed well of knowledge on the topic, highly recommended.
Backing up whole disk is the most safe as usual, and the minimalistic measure is backing up EFI System Partition (the FAT32 one) along with your data.
| how do I install GRUB into the ESP with efibootmgr? |
1,580,829,756,000 |
I have a hard drive which is encrypted using LUKS. It was originally an external hard drive. Recently I removed the casing and connected it directly (via SATA). However, when I connect it directly, I'm unable to view the partition, and it doesn't prompt for the password. Out of 4 TB, it shows an unknown partition of 500GB and free space of 3.5TB.
I removed it from the system and connected it as an external hard drive again, and Ubuntu detects the partitions and prompts for the password.
Also, the partitioning is shown as MBR, when in reality it is GPT
|
It's probably a problem with the sector size. Some USB enclosures claim their drives have 4KiB sectors, when the drive represents itself as 512 byte sectors or vice versa. Partition tables (both msdos and gpt) unfortunately depend on the sector size. If the sector size changes, the partition table becomes invalid.
Now, this is a problem that could be solved in software - Linux could be made smart enough to interpret a GPT partition table correctly, regardless of the physical sector size the drive claims to have. But it doesn't do that, and it's probably not part of the standard, so ...
What you need to do is get the exact byte offsets of your partitions while in the USB closure
parted /dev/usbdrive unit b print free
and then see if those partition offsets work for the internal drive
losetup --find --show --read-only --offset 1048576 /dev/internaldrive
file -s /dev/loopX
and if that works out okay, re-create the partition table with the same (byte) offsets for the internal disk (make a backup of the first/last few megabytes of the disk first)
parted /dev/internaldisk unit b mklabel gpt mkpart 1048576 42424242 ...
I don't know if there is a partitioner that is smart enough to 'repair' such wrong-sector-size partition tables automagically. It would beat the manual approach but ...
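The arithmetic behind the offsets above, as a sketch: partition entries store sector numbers, so the byte offset of a partition depends on the sector size the table was written for. The losetup offset 1048576 is sector 2048 under 512-byte sectors; under 4096-byte sectors the same entry points eight times further into the disk:

```python
# Byte offset of a partition entry under a given logical sector size.
# This is why a table written in a 4K-sector enclosure "moves" when
# the same drive reports 512-byte sectors over SATA.

def byte_offset(start_sector, sector_size):
    return start_sector * sector_size

print(byte_offset(2048, 512))   # 1048576 (1 MiB, the losetup offset above)
print(byte_offset(2048, 4096))  # 8388608 (8 MiB; same entry, 4K sectors)
```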
| LUKS on an internal hard drive |
1,580,829,756,000 |
Fresh Arch Linux install on (hardware) RAID0 under 64-bit UEFI system with GPT partitions. Had to add
MODULES="ext4 dm_mod raid0"
HOOKS="base udev autodetect modconf block mdadm_udev filesystems keyboard fsck"
into /etc/mkinitcpio.conf so that partitions on RAID0 are recognized properly on boot. Otherwise,
ERROR: device 'UUID=<uuid>' not found. Skipping fsck.
ERROR: Unable to find root device 'UUID=<uuid>'.
...
would be issued.
There is one peculiarity however, and I don't know how to explain it. On the one hand, when /etc/fstab contains either /dev/* or UUID=* sources, Arch Linux boots normally. On the other hand, when it contains PARTUUID=* sources, a bunch of the corresponding Dependency failed errors (regarding mounting of those sources from /etc/fstab) happen on boot and it hangs.
Could you explain what's wrong about having PARTUUID=* in /etc/fstab in this case? Does that have something to do with RAID0?
$ cat /proc/mdstat
Personalities : [raid0]
md126 : active raid0 sda[1] sdb[0]
976768000 blocks super external:/md127/0 128k chunks
md127 : inactive sda[1](S) sdb[0](S)
4904 blocks super external:imsm
unused devices: <none>
$ dmsetup table
No devices found
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
└─md126 9:126 0 931.5G 0 raid0
├─md126p1 259:0 0 1G 0 md /boot/efi
├─md126p2 259:1 0 1G 0 md
├─md126p3 259:2 0 1G 0 md
├─md126p4 259:3 0 256G 0 md
├─md126p102 259:4 0 16G 0 md [SWAP]
├─md126p103 259:5 0 16G 0 md /
├─md126p104 259:6 0 16G 0 md /var
└─md126p105 259:7 0 256G 0 md /home
sdb 8:16 0 465.8G 0 disk
└─md126 9:126 0 931.5G 0 raid0
├─md126p1 259:0 0 1G 0 md /boot/efi
├─md126p2 259:1 0 1G 0 md
├─md126p3 259:2 0 1G 0 md
├─md126p4 259:3 0 256G 0 md
├─md126p102 259:4 0 16G 0 md [SWAP]
├─md126p103 259:5 0 16G 0 md /
├─md126p104 259:6 0 16G 0 md /var
└─md126p105 259:7 0 256G 0 md /home
sr0 11:0 1 1024M 0 rom
$ blkid
/dev/sda: TYPE="isw_raid_member"
/dev/sdb: TYPE="isw_raid_member"
/dev/md126p1: LABEL="EFI" UUID="722E-E4AB" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="a8e94657-e6ea-4712-be06-ac9ffe6e2258"
/dev/md126p3: LABEL="Windows PE 5.0 (x64)" UUID="181C2F991C2F7144" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="15848c79-1456-418b-a243-830d0db894ce"
/dev/md126p4: LABEL="Windows 8.1 (x64)" UUID="AAB83149B83114F3" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="7d3a06f5-4c67-4299-80b0-029501e14f18"
/dev/md126p102: UUID="6a2d4998-3ac8-4135-9d72-47960b201d5d" TYPE="swap" PARTLABEL="Swap" PARTUUID="d418edd6-44eb-4058-921f-c68aa191c5ac"
/dev/md126p103: UUID="2c241730-a076-48d9-8d1f-6e10573a994f" TYPE="ext4" PARTLABEL="Arch Linux" PARTUUID="37200e1e-dea4-435a-a873-427e3ee8c494"
/dev/md126p104: UUID="8d4eff47-3a2b-46b4-9263-7bbf00d8d0db" TYPE="ext4" PARTLABEL="Variable" PARTUUID="cd15b1f0-e948-4975-9218-591efa5b9b95"
/dev/md126p105: UUID="e0b15e56-3846-4e75-96f8-4f75058b4a6b" TYPE="ext4" PARTLABEL="Home" PARTUUID="54e85323-522c-415a-b7bd-2eb83b6b4ee6"
/dev/md126: PTUUID="e4e1b9b8-c26f-416d-82d9-e9350d0b5ac2" PTTYPE="gpt"
/dev/md126p2: PARTLABEL="Microsoft reserved partition" PARTUUID="6e9264fd-da04-4966-b8e0-8f3124f47050"
|
Since it's now clear you're running software raid ("fake raid", where the firmware/BIOS also has a software RAID implementation to make booting Windows off of it easier—in this case, Intel Matrix Storage), you're probably seeing some bug in Arch's initramfs w/r/t partitioning md arrays.
True hardware raid is almost entirely transparent to the OS; e.g., you would see only one device, the RAID array, not one device per disk. A hardware RAID array looks just like a normal disk to the OS, at least once you've got the RAID driver installed (without it, the OS just doesn't see it at all).
For quite a while, you couldn't partition md arrays at all (it was common—still is—to use LVM on top of them, or to create multiple arrays); later, you could set up a partitionable one, but it wasn't the default; nowadays they can all be partitioned. But probably something still has an assumption about them not being partitionable, and is looking for that partuuid on a physical disk, not the RAID array.
Personally, I'd not worry about it and just use the UUID instead. Also, in general, for a Linux-only box, is usually better to not use the "fake raid" at all, and just use Linux mdraid directly with its native formats. With RAID-0, I'm sure you'll have a chance to rebuild the box soon enough...
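If you go the UUID route, the blkid output in the question already contains everything needed; a sketch of the corresponding /etc/fstab entries (UUIDs taken from that output, mount options are assumptions to adjust to taste):

```
UUID=2c241730-a076-48d9-8d1f-6e10573a994f  /          ext4  defaults    0  1
UUID=8d4eff47-3a2b-46b4-9263-7bbf00d8d0db  /var       ext4  defaults    0  2
UUID=e0b15e56-3846-4e75-96f8-4f75058b4a6b  /home      ext4  defaults    0  2
UUID=722E-E4AB                             /boot/efi  vfat  umask=0077  0  2
UUID=6a2d4998-3ac8-4135-9d72-47960b201d5d  none       swap  sw          0  0
```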
| 'PARTUUID' in '/etc/fstab' and (hardware) RAID0 don't play well together, do they? |
1,580,829,756,000 |
I am trying to learn, and especially understand, how partitioning and boot loaders work. The problem is that I got it all twisted in my mind. In the end I don't understand anything anymore.
I know how to partition a hard drive using fdisk, parted, gdisk.
I tried chainloading iso files (such as ubuntu.iso, arch.iso) with syslinux.
To illustrate my confusion, here is what I have done :
Creating a linux partition :
$ gdisk /dev/sdb
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-7821278, default = 36) or {+-}size{KMGTP}:
Last sector (36-7821278, default = 7821278) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): p
Disk /dev/sdb: 7821312 sectors, 3.7 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): F7F2BE49-B8D8-4910-8E69-381DEBD954DC
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7821278
Partitions will be aligned on 4-sector boundaries
Total free space is 2 sectors (1024 bytes)
Number Start (sector) End (sector) Size Code Name
1 36 7821278 3.7 GiB 8300 Linux filesystem
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
Then I formatted this partition as an ext2 :
$ mkfs.ext2 /dev/sdb1
Now I want to install MBR with syslinux (taken from the very few tutorials I found)
$ syslinux -m /dev/sdb1
syslinux: invalid media signature (not a FAT filesystem?)
So it needs to be a FAT partition. However, I read that syslinux supports FAT32, ext2, ext3 and ext4 file systems (https://wiki.archlinux.org/index.php/syslinux#Installation)
1) What is wrong here, since syslinux is supposed to support ext2 partitions?
So I formatted the partition as a FAT32 partition:
$ mkfs.vfat -F 32 /dev/sdb1
Now installing the syslinux MBR works:
$ syslinux -m /dev/sdb1
$ syslinux -i /dev/sdb1
2) Do I have to install a MBR, isn't syslinux compatible with GPT? I read on documentations that GPT has more advantages over MBR, such as allowing the creation of way more primary partitions. Did I misunderstand?
I then found that I need to flag the partition as bootable (http://www.linuxquestions.org/questions/linux-general-1/booting-iso-images-from-a-usb-disk-917161/). Can I do that with gdisk? It seems to me it is not possible, as the manual does not talk about boot flagging. On the other hand, fdisk allows me to do so. However, here is another issue:
$ fdisk /dev/sdb
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
3) Does gdisk automatically create a GPT?
$ gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.8
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
4) Where does this MBR come from? How can MBR and GPT coexist like this?
As you can see, as soon as I tried doing more in-depth partition manipulations, I realized everything was mixed up. I would sincerely appreciate it if you could answer my questions and especially provide me with additional documentation: https://wiki.archlinux.org and http://www.syslinux.org/wiki actually made my understanding worse than ever. Many thanks.
|
1) What is wrong here, since syslinux is supposed to support ext2 partitions?
Yes, Syslinux supports ext2 via Extlinux. If you are using a UEFI/EFI-based system, you need a FAT32 partition for the ESP. For BIOS booting from a GPT disk you don't need a FAT32 partition; just go with a traditional Linux file system, i.e. ext2/3/4.
2) Do I have to install a MBR, isn't syslinux compatible with GPT? I read on documentations that GPT has more advantages over MBR, such as allowing the creation of way more primary partitions. Did I misunderstand?
It's up to you which one you want to use; both the msdos (MBR) and GPT partition tables are supported.
In the case of GPT you can use gdisk to set the legacy BIOS bootable flag; it's necessary to have that flag on the boot partition. After entering the gdisk menu, use 'x' to go into expert mode and then 'a' to set attributes (attribute 2 is "legacy BIOS bootable").
3) Does gdisk automaticaly create a GPT ?
Yes; see http://linux.die.net/man/8/gdisk
For How To, visit http://wiki.gentoo.org/wiki/Syslinux
| Understanding syslinux and partitioning |