| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,428,257,586,000 |
Is it possible to limit a specific user's processes to always run with a niceness of at least 15, say?
|
You can set per-user or per-group nice values using pam_limits and the /etc/security/limits.conf file.
e.g.
username hard priority 15
This only affects PAM services which are configured to use the pam_limits module. Depending on your distribution, this is probably already enabled for services like login, cron, atd, sshd, and others. Or you may have to enable it by adding a line like the following to, e.g., /etc/pam.d/login:
session required pam_limits.so
See man pam_limits and the comments in /etc/security/limits.conf for details.
If you have the PAM doc package installed, there may be additional documentation at /usr/share/doc/libpam-doc/html/sag-pam_limits.html
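As a quick way to check what limit a session actually received, the kernel exposes this as RLIMIT_NICE; the following Python sketch (my illustration, not part of the original answer) reads it. Note the inverted encoding: a raw limit value of N permits nice values down to 20 - N.

```python
import os
import resource

# RLIMIT_NICE uses an inverted encoding: ceiling = 20 - raw limit,
# so a raw soft limit of 5 allows lowering niceness to 15 at best.
soft, hard = resource.getrlimit(resource.RLIMIT_NICE)
ceiling = 20 - soft
print("lowest reachable nice value:", ceiling)
print("current niceness:", os.nice(0))  # os.nice(0) reads without changing
```

Running this in a shell governed by pam_limits should reflect the limits.conf entry; in an unmanaged session the raw limit is typically 0.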
| Is it possible to impose a minimum niceness on a specific user? |
1,428,257,586,000 |
How would one go about determining the amount of space allocated to their system for command lines? Is there anything else I need to be concerned with?
|
xargs
One method that I'm aware of is to use xargs to find this information out.
$ xargs --show-limits --no-run-if-empty < /dev/null
Your environment variables take up 4791 bytes
POSIX upper limit on argument length (this system): 2090313
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2085522
Size of command buffer we are actually using: 131072
getconf
The limit that xargs is displaying derives from this system configuration value.
$ getconf ARG_MAX
2097152
Values such as these are typically "hard coded" on a system. See man sysconf for more on these types of values. I believe these types of values are accessible inside a C application, for example:
#include <stdio.h>
#include <unistd.h>

int main(void) { printf("%ld\n", sysconf(_SC_ARG_MAX)); }
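The same value is also reachable from scripting languages without writing C; for instance (my addition, not part of the original answer), Python wraps sysconf(3) directly:

```python
import os

# os.sysconf("SC_ARG_MAX") maps to the _SC_ARG_MAX constant.
arg_max = os.sysconf("SC_ARG_MAX")
print(arg_max)

# POSIX guarantees at least _POSIX_ARG_MAX (4096) bytes.
assert arg_max >= 4096
```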
References
ARG_MAX, maximum length of arguments for a new process
| How does one determine the amount of space allocated to their command line? |
1,428,257,586,000 |
GoogleBot is hitting my server hard - and even though I have set the CrawlRate in Webmaster Tools it is still hiking up the load on my server and slowing down Apache for the rest of the normal web traffic.
Is it possible to limit / rate-limit connections per second / minute using UFW based on a user agent string?
If not how can I do it for GoogleBot's IP ranges?
|
You cannot do this with ufw directly; instead you need to add the right iptables rules to /etc/ufw/before.rules.
I suggest you learn iptables. As a (not optimized) starting point, something like
-A ufw-before-input -p tcp --syn --dport 80 -m recent --name LIMIT_BOTS --update --seconds 60 --hitcount 4 -j DROP
-A ufw-before-input -p tcp --dport 80 -m string --algo bm --string "NotWantedUserAgent" -m recent --name LIMIT_BOTS --set -j ACCEPT
could work, where you of course need to replace NotWantedUserAgent with the correct user agent string.
These rules should limit the number of new connections per minute from a specific bot. I have not tested them and do not know whether they really reduce the workload caused by a specific bot.
| Can I limit connections per second for certain UserAgents using UFW? |
1,428,257,586,000 |
I am running Debian wheezy. File limits are increased to 100000 for every user.
ulimit -a and ulimit -Hn / -Sn show me the correct maximum open file limits, even inside screen.
But for some reason I am not able to have more than ~4000 connections / open files.
from sysctl.conf:
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 500 65000
net.core.somaxconn = 81920
Output of ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256639
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 999999
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 256639
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
for example redis:
client: benchmark with 100 clients
Writing to socket: Connection reset by peer
Writing to socket: Connection reset by peer
Writing to socket: Connection reset by peer
Writing to socket: Connection reset by peer
Error: Connection reset by peer
server info:
127.0.0.1:6379> info clients
-Clients
connected_clients:4005
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
Java:
Caused by: io.netty.channel.ChannelException: Failed to open a socket.
Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.SelectorProviderImpl.openSocketChannel(Unknown Source)
java.io.IOException: Too many open files
Caused by: io.netty.channel.ChannelException: Failed to open a socket.
Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.SelectorProviderImpl.openSocketChannel(Unknown Source)
Caused by: io.netty.channel.ChannelException: Failed to open a socket.
Caused by: java.net.SocketException: Too many open files
ls -l /proc/[id]/fd | wc -l shows ~4000 descriptors
|
There are two settings that limit the number of open files: a per-process limit, and a system-wide limit. The system-wide limit is set by the fs.file-max sysctl, which can be configured in /etc/sysctl.conf (read at boot time) or set on the fly with the sysctl command or by writing to /proc/sys/fs/file-max. The per-process limit is set by ulimit -n.
The per-process limit is inherited by each process from its parent. A default value can be set in /etc/security/limits.conf, but this only applies to interactive sessions, not to daemons started at boot time. It will apply to a daemon only if it's started via an interactive session.
To increase (or decrease) per-process limits for a daemon, in general, edit its startup script and add a call to ulimit just before the daemon is started. The Debian redis package comes with a configuration setting in a separate file, /etc/default/redis: uncomment the ULIMIT= line and increase the value if necessary.
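To see both kinds of limit from inside a running process, here is a small Python sketch of mine (not from the original answer); it reads the per-process limit via getrlimit and the system-wide limit from /proc:

```python
import resource

# Per-process limit: what `ulimit -n` reports for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# System-wide limit: the fs.file-max sysctl.
with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().split()[0])

print(f"per-process: soft={soft} hard={hard}; system-wide: {file_max}")
```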
| Daemon's open file limit is reached even though the system limits have been increased |
1,428,257,586,000 |
As an experiment on a test system, I tried to limit my own number of processes using /etc/security/limits.conf. When logged in, I had 16 processes running under my name (all had ruid, euid and suid equal to my uid).
I tried first setting the hard and soft limit to 20. I logged out and I could not log back in again because I could not create processes. I raised the limit to 30 and I still could not get in. When I raised the limit to 50 processes I could get in, but zsh threw some errors. I found I could create 2 more processes and that was it.
My question is: if I set the limit to N (in this case, 20), why does it not enforce exactly N processes as the limit? Does the limit trigger when the user is within some percentage of it? Otherwise I don't understand why it would not let me create more processes when I still had room under the limit.
Running Linux 4.19 on a standard Debian (systemd based)
EDIT:
To count the processes I did try:
ps ux: This yields 14 processes
and for good measure
cat /proc/*/status | grep Uid | grep 1000 | wc -l: Which yields 16 processes.
The difference is expected due to the extra processes the one-liner uses.
The output of the grep 1000 (my uid) is:
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Uid: 1000 1000 1000 1000
Which shows that all are running as real, effective, saved and fsuid 1000 (me)
I believe I have exactly 13 processes, because I trust ps and it counts itself, so it should be 13, right?
|
As per man 2 setrlimit:
RLIMIT_NPROC
This is a limit on the number of extant process (or, more precisely on Linux, threads)
Which is, arguably, somewhat counter-intuitive. In any case, it might be that the login process spawns a number of threads that trips the limit when it is set to the order of 20 or 30.
I tested logging in, then lowering the limit, and forking simple, single-threaded processes until I got an error. The limits behaved as expected.
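The thread-counting behaviour can be verified by scanning /proc yourself. This Python sketch (mine, not part of the original answer; Linux-only) counts tasks, i.e. threads, owned by a real UID, which is the quantity RLIMIT_NPROC compares against:

```python
import os

def count_tasks(uid):
    """Count Linux tasks (threads) whose real UID is uid -- what
    RLIMIT_NPROC actually limits, not just top-level processes."""
    total = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/status") as f:
                status = f.read()
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # process exited (or is unreadable) while scanning
        fields = dict(line.split(":", 1)
                      for line in status.splitlines() if ":" in line)
        if "Uid" not in fields or "Threads" not in fields:
            continue  # partially torn-down process
        if int(fields["Uid"].split()[0]) == uid:  # first column: real UID
            total += int(fields["Threads"])
    return total

print(count_tasks(os.getuid()))
```

If the task count is noticeably higher than the process count from ps, multi-threaded login helpers are the likely reason the limit trips earlier than expected.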
| Linux not enforcing limits correctly? |
1,428,257,586,000 |
I'm trying to set up an older version of ocaml, and I'm getting an error message that says that I need to increase my stack size. The only way I've found to do this in cygwin involves running an additional argument with gcc, but the instructions I'm following have me using a makefile to compile the program. Since I'm not manually typing the gcc command, I'm not sure where to add that argument.
Here's a pastebin of the installation process: http://pastebin.com/j2Q45pKm
$ make world.opt
…
The current stack size limit is too low (2026k)
You must increase it with one of the following commands:
Under sh, bash, zsh: ulimit -s 3072
Under csh, tcsh: limit stacksize 3072
Makefile:621: recipe for target 'checkstack' failed
make: *** [checkstack] Error 3
And here's the argument I'm trying to pass: https://stackoverflow.com/questions/156510/increase-stack-size-on-windows-gcc
Because this is Cygwin, ulimit doesn't work to increase the stack size.
|
Run ./configure -cc "gcc -Wl,--stack,16777216" (plus any other option you want) if you want to always run gcc with the argument -Wl,--stack,16777216 during the compilation process. After that, run make clean, then make world.opt again. You need to clean all previously generated binaries (not the byte compiled files, but it's easier to just do make clean) so that they're regenerated with the new stack size option.
The OCaml makefile doesn't use the common CC and CFLAGS conventions, because it can use different compilers with different options for different parts of the build process. Building compilers tends to be a bit peculiar.
| Increasing stack size in 64 bit Cygwin? (installing ocaml) |
1,428,257,586,000 |
Is there a way to restrict CPU time (duration) for all processes which are invoked by executables that are located in a certain directory?
I would like to be able to auto-kill all applications which certain users start in their home directories after a certain amount of time (for example after 10 minutes).
|
What you want is in this answer and this answer
The only thing I'll add is the -u user option for ps, e.g.:
ps -u <username>
to search processes started by a user.
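A complementary approach (my sketch, not taken from the linked answers) is to install the limit at launch time with RLIMIT_CPU, so the kernel itself signals the process once it has consumed its CPU-time budget. Note this counts CPU seconds, not wall-clock minutes:

```python
import resource
import subprocess

def run_with_cpu_limit(cmd, cpu_seconds):
    """Run cmd with a CPU-time budget: the child receives SIGXCPU at
    the soft limit and SIGKILL at the hard limit."""
    def set_limit():
        resource.setrlimit(resource.RLIMIT_CPU,
                           (cpu_seconds, cpu_seconds + 5))
    return subprocess.run(cmd, preexec_fn=set_limit)

# /bin/true uses almost no CPU, so it finishes well within the budget.
result = run_with_cpu_limit(["/bin/true"], 600)
print(result.returncode)
```

A wrapper like this could be placed around the executables in the directory in question; limiting by wall-clock time instead would need something like timeout(1).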
| Restricting CPU time of processes by executable path |
1,428,257,586,000 |
So per POSIX specification we have the following definition for *:
Expands to the positional parameters, starting from one, initially
producing one field for each positional parameter that is set. When
the expansion occurs in a context where field splitting will be
performed, any empty fields may be discarded and each of the non-empty
fields shall be further split as described in Field Splitting. When
the expansion occurs in a context where field splitting will not be
performed, the initial fields shall be joined to form a single field
with the value of each parameter separated by the first character of
the IFS variable if IFS contains at least one character, or separated
by a <space> if IFS is unset, or with no separation if IFS is set to a
null string.
For a vast majority of people we are aware of the famous ARG_MAX limitation:
$ getconf ARG_MAX
2621440
which may lead to:
$ cat * | sort -u > /tmp/bla.txt
-bash: /bin/cat: Argument list too long
Thankfully the good people behind bash ([include all POSIX-like others]) provided us with printf as a built-in, so we can simply:
printf '%s\0' * | sort -u --files0-from=- > /tmp/bla.txt
And everything is transparent for the user.
Could someone please explain why it is so trivial to bypass the ARG_MAX limitation using a built-in command, yet so damn hard to provide a conforming POSIX shell interpreter that gracefully handles passing the expanded * special parameter to a standalone executable:
$ cat *
Would that break something? I am not asking the bash people to provide cat as a built-in; I am solely interested in the order of operations and why * expansion behaves differently depending on whether the command is a built-in or a standalone executable.
|
The limitation is not in the shell but in the exec() family of functions.
The POSIX standard says in relation to this:
The number of bytes available for the new process' combined argument and environment lists is {ARG_MAX}. It is implementation-defined whether null terminators, pointers, and/or any alignment bytes are included in this total.
To run utilities that are built into the shell, the shell will not need to call exec(), so it is unaffected by this limitation.
Notice, too, that it's not simply the length of the command line that is limited, but the combination of the length of the command, its arguments, and the current environment variables and their values.
Also notice that printf is not a built-in utility in e.g. pdksh (which happens to act as sh and ksh on OpenBSD). Relying on it being a built-in requires taking the specific shell being used into account.
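The E2BIG failure can be provoked deliberately. Here is a small Python probe of mine (not part of the original answer) that asks for the limit and then hands an oversized argument to an external binary; on Linux a single argument that large also exceeds the kernel's per-string limit, so execve() refuses the call:

```python
import errno
import os
import subprocess

arg_max = os.sysconf("SC_ARG_MAX")
print("ARG_MAX:", arg_max)

# A shell built-in needs no execve(), but handing the same payload to
# an external binary makes the kernel reject the call with E2BIG.
try:
    subprocess.run(["/bin/echo", "x" * (arg_max + 1)],
                   stdout=subprocess.DEVNULL)
except OSError as exc:
    print("execve failed:", errno.errorcode[exc.errno])
```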
| Command line length limit: built-in vs executable |
1,428,257,586,000 |
When I initially log in to a server, I see the following error message.
limit: coredumpsize: Can't set limit (Operation not permitted)
Further, when I try to copy files into this machine, I see the same error,
cat .ssh/no_pass_rsa.pub | ssh user@server 'cat >> .ssh/authorized_keys'
user@server's password:
limit: coredumpsize: Can't set limit (Operation not permitted)
I read in a lot of blog posts that I must increase the hard limit and soft limit for the users. Trying the following command to check ulimit gives:
server> ulimit
ulimit: Command not found.
I could find no post where a user faced the issue of this operation not existing.
Also, I checked the limit and noticed that the coredumpsize is 0kbytes
$server> limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 33000 kbytes
coredumpsize 0 kbytes
memoryuse unlimited
vmemoryuse unlimited
descriptors 1048576
memorylocked 64 kbytes
maxproc 1030357
maxlocks unlimited
maxsignal 1030357
maxmessage 819200
maxnice 0
maxrtprio 0
maxrttime unlimited
How do I increase the coredump size or modify it to resolve this? Is there any other solution?
|
The ulimit: Command not found error occurred because only the root user has the privileges to run this command, and I was trying to execute it as a normal user. By running it as root, the command executed successfully.
Error resolved: coredumpsize: Can't set limit (Operation not permitted)
The issue was resolved by editing /etc/security/limits.conf and modifying the lines that set core limits (as the root user).
File : /etc/security/limits.conf
Initially :
# End of file
##### Begin HR 424923 ######
* soft nofile 8192
* hard nofile 8192
##### End HR 424923 ########
# Limit core dumps
* soft core 0
* hard core 0
Finally :
# End of file
##### Begin HR 424923 ######
* soft nofile 8192
* hard nofile 8192
##### End HR 424923 ########
# Limit core dumps
* soft core 65535
* hard core 65535
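A detail worth noting (my illustration, not part of the original answer): "Operation not permitted" concerns raising the hard limit. Any process may raise its own soft limit up to the hard limit without privileges, which Python's resource module shows directly:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("before:", soft, hard)

# Raising the soft limit up to the hard limit needs no privileges;
# raising the *hard* limit is what fails with EPERM for ordinary users.
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
print("after:", resource.getrlimit(resource.RLIMIT_CORE))
```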
| ulimit command not found (without sudo) and error - coredumpsize: Can't set limit (Operation not permitted) |
1,428,257,586,000 |
Is it possible to limit the number of processes for a given group or user by process name? E.g. I'd like the group remotes to have only 5 simultaneous ssh processes running on my server.
I don't see any such option in pam_limits (I can only limit the number of processes per user or group, regardless of process name) and I don't see the ability in cgroups.
Do you have any ideas how to accomplish this? (script in cron is not an answer for me :))
|
No, it is not possible to limit by process name, because the process name can be changed easily, so such a limit could easily be evaded. (It can even be changed at runtime, I think.)
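To illustrate how easily the name changes at runtime, here is a Linux-only Python sketch of mine (not part of the original answer) that rewrites its own comm name, the string that ps, top and pkill match on:

```python
# /proc/self/comm holds the process name (at most 15 characters);
# a process may rewrite its own name at any time.
with open("/proc/self/comm", "r+") as f:
    print("before:", f.read().strip())
    f.seek(0)
    f.write("sneaky")

with open("/proc/self/comm") as f:
    print("after:", f.read().strip())
```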
| Limiting number of processes by name |
1,428,257,586,000 |
This is the situation:
I have a PHP/MySQL web application that does some PDF processing and thumbnail creation. This is done by using some 3rd party command line software on the server. Both kinds of processing consume a lot of resources, to the point of choking the server. I would like to limit the amount of resources these applications can use in order to enable the server to keep serving users without too much delay, because now when some heavy PDF is processed my users don't get any response.
Is it possible to constrain the amount of RAM and CPU an application can use (all processes combined)? Or is there another way to deal with these kinds of situations? How is this usually done?
|
Run it with nice -n 20 ionice -c 3
That will make it use only the CPU cycles and I/O bandwidth not used by other processes.
For RAM, all you can do is kill the process when it uses more than the amount you want it to use (using ulimit).
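For the memory part, the limit can be installed just before launching the tool. A hedged Python sketch (mine; /bin/true stands in for the real PDF or thumbnail command, and note that RLIMIT_AS caps address space rather than resident RAM):

```python
import resource
import subprocess

def limit_memory():
    # Cap the child's address space at 512 MiB; allocations beyond
    # that fail, so a runaway tool aborts instead of choking the host.
    lim = 512 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (lim, lim))

# "/bin/true" is a placeholder for the actual processing command.
proc = subprocess.run(["/bin/true"], preexec_fn=limit_memory)
print(proc.returncode)
```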
| How to constrain the resources an application can use on a linux web server |
1,428,257,586,000 |
I'm trying to set a default niceness/priority for a user's processes on Ubuntu 18.04.3 LTS in limits.conf, and everything I write in limits.conf is simply ignored. Hard nice, soft nice, hard priority, soft priority, - priority: it doesn't matter, it just doesn't work.
session required pam_limits.so in /etc/pam.d/su was uncommented by default and I also tried to reboot the system after making changes in limits.conf.
Why is this happening and how do I fix it?
Example 1:
root soft nice -16
root hard nice -17
When I log into a server with ssh [email protected] I expect to see in top -o -NI output at least one process (my bash login session) with a NI of -16 or -17. None of the processes had this value of nice.
Example 2:
user123 soft nice 5
user123 hard nice 5
When I request a "http://host.name/benchmark.php" I expect to see in top -o %CPU output the PHP FastCGI process with niceness of 5. I see a PHP FastCGI process with niceness of 0
Example 3:
user123 soft priority 25
user123 hard priority 25
When I request a "http://host.name/benchmark.php" I expect to see in top -o %CPU output the PHP FastCGI process with priority of 25. I see a PHP FastCGI process with priority of 20
Example 4:
user123 - priority 25
When I request a "http://host.name/benchmark.php" I expect to see in top -o %CPU output the PHP FastCGI process with priority of 25. I see a PHP FastCGI process with priority of 20
|
Example 1. The documentation for nice(2) explains that "The range of the nice value is +19 (low priority) to -20 (high priority)". When you set the entry in limits.conf to -16/-17 that's effectively an upper limit that can be reduced to the values I assume you saw.
Examples 2, 3, 4. Your webserver is probably not calling PAM to change userid, so limits.conf isn't referenced.
| limits.conf is not working |
1,425,636,689,000 |
Addressing a Too many open files error, I was attempting to follow the suggestions here.
Although /etc/sysctl.conf contains
fs.file-max = 70000
vm.swappiness = 10
and /etc/security/limits.conf contains
nginx soft nofile 10000
nginx hard nofile 30000
and the changes have been applied with the sysctl command, the errors are identical and
user@mo:~$ ulimit -Hn
4096
user@mo:~$ ulimit -Sn
1024
Ubuntu 14.04 environment. /etc/nginx/nginx.conf
[...]
events {
worker_connections 1024;
}
[...]
passenger_enabled on;
rails_env development;
root /home/user/app/current/public;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
location / {
# proxy to upstream server
proxy_pass http://127.0.0.1;
proxy_redirect default;
# track uploads in the 'proxied' zone
# remember connections for 30s after they finished
track_uploads proxied 30s;
}
location ^~ /progress {
# report uploads tracked in the 'proxied' zone
report_uploads proxied;
}
}
Update: as suggested, /etc/pam.d/common-session now contains
[...]
session required pam_unix.so
session optional pam_systemd.so
session required pam_limits.so
|
You need to complete some more steps to increase the maximum number of open files on Ubuntu.
Edit /etc/pam.d/common-session and append the line below:
session required pam_limits.so
Restart your system to apply the changes.
You can set limits for all users on the system by adding the lines below:
* soft nofile 10000
* hard nofile 30000
And reboot the system.
| number of open files configuration not operational |
1,425,636,689,000 |
My company literally has thousands of NFS volumes, which we need to mount on a few of our servers. However, we never mount all of them at the same time; typically we mount about 12 at a time. Most of the time, most of the NFS volumes are actually offline. In the past, we have been mounting and unmounting them manually via the command line, as the need arose.
We would like to move to using autofs for mounting these volumes.
Is it bad to define thousands of autofs rules, one for each NFS volume? Does autofs have a hard limit about the number of rules it can have? Does autofs performance decrease drastically with a large number of rules?
|
We have several maps, and some of the maps have tens or even hundreds of volumes, so it does work for a fleet of hundreds of Linux machines and tens of other platforms. The only issue we had with old autofs software in the early days was that it could not cope with modifications and updates online, that is, changing the NFS volume and mount point while decommissioning old NFS filers. I do not believe performance is an issue; however, one always has to test and confirm. Scalability and performance were enhanced in Linux autofs 5.x after multithreading was introduced, and for us even our minimal distro, RHEL 6, already comes with 5.x. So no, it is not bad; you need to benchmark and test it, and it will add better flexibility and stability to your environment, as the configuration will be centralised (I am hoping that is one reason you would like to do so).
| Is it bad to have thousands of autofs rules? |
1,425,636,689,000 |
This problem has now appeared twice on my production Ubuntu machine running both a node server (tiny) and a Spring Boot Java server (the workhorse). The first time it happened, grinding my server to a halt, I found the file /proc/sys/fs/file-max had a value of 808286, which seemed totally reasonable to me, BUT I increased it anyway to 2097152, and then I don't recall what I did after that (this was a good year ago or so) but I probably either restarted the server or at least my service and dusted my hands of the problem. Well, today it just came back to haunt me again. Restarting my Java service has temporarily fixed the problem, but I want to understand what is happening to avoid this in the future.
The /etc/security/limits.conf file is the default one installed by Ubuntu and thus is just a file of comments, so no need to reproduce it here.
Java service is run by a user tomcat.
Apache is run by root. This directs traffic to either the node or Spring Boot services.
The result of some relevant commands on my system ...
$ ulimit -Sn
1024
$ ulimit -Hn
1048576
$ sudo su tomcat
$ ulimit -Sn
1024
$ ulimit -Hn
1048576
Side question, why is my hard limit half the value of that set in /proc/sys/fs/file-max?
But now if I look at the limits per process I get the following...
$ cat /proc/<tomcat_java>/limits
Limit Soft Limit Hard Limit Units
Max open files 4096 4096 files
$ cat /proc/<root_apache>/limits
Limit Soft Limit Hard Limit Units
Max open files 8192 8192 files
$ cat /proc/<my_random_process>/limits
Limit Soft Limit Hard Limit Units
Max open files 1024 1048576 files
So, what is going on here? The only processes that I can find that pay attention to the file-max that I have set are my own. (The "random" process I used above was my bash shell). Where are these limits for apache and java coming from. I can certainly see blowing the 4096 limit above (which is probably my problem) but I have no idea how to get it to use the system set limits.
Thanks for any help on this.
|
Your question looks like How to set ulimits on service with systemd? - the open files limit needs to be addressed in the start-up script of the Apache Tomcat server. For example: assuming that the Ubuntu machine uses systemd, one can increase the open files limit for the tomcat java process to 65000 by editing the start-up script file like this:
/etc/systemd/system/tomcat.service:
[Unit]
Description=Tomcat Service
After=syslog.target network.target
[Service]
Type=forking
User=tomcat
Group=tomcat
LimitNOFILE = 65000 <---------
....
[Install]
WantedBy=multi-user.target
Reload the daemons and restart the tomcat server...
systemctl daemon-reload
systemctl restart tomcat
Verify with something similar to...
pstree -pu |grep tomcat
|-java(23638,tomcat)-+-{java}(23645)
grep open /proc/23638/limits
Max open files 65000 65000 files
| "Too Many Open Files" for Apache and java/tomcat. How to set per-process limits? |
1,425,636,689,000 |
I have this perl script, and I discovered the pv command and decided to use it to get some feedback on what is going on with the randomness in terms of throughput. After a few tests [1] I decided to throttle the command, like so:
perl_commands < /dev/urandom | pv -L 512k | tr -cd SET
5.5MiB 0:00:11 [ 529kiB/s] [ <=> ]
I suspend to RAM using systemctl suspend (ArchBang). When I resume, the command is still running and its display includes the time elapsed during suspend, but it looks as if the limit I set is no longer enforced: throughput is 2-3MiB/s and CPU usage is higher, like without a limit. After some time this subsides and I can see that the limit is still enforced.
For example, if I run the command for only a few seconds it'll take seconds for the throughput to come back to its set limit. On the other hand, generating 815Mb of data during an hour, then suspending for 30 mins, it then takes about 5 mins for the command to return to the limit I had set - and CPU usage is like with no throttling during that time.
So it is not that the limit isn't enforced, it's rather that coming out of suspend to ram seems to impact the throughput in this context. Why and can this behavior be changed?
[1] The command uses one CPU core when not throttled. With a limit of 512KiB/s, CPU usage is about 10-15% or less. It takes about 2GB of randomness (and some time) to fill my 80x40 terminal window (depending on SET).
|
pv doesn't know about the system power states. All it sees is that the clock changed by a very large amount at some point.
My guess is that pv doesn't care if the amount of time between two clock readouts suddenly gets large and just calculates the throughput based on the time interval. Since the interval is very large, it appears that the throughput is very low.
The throughput calculation is averaged over a number of clock reads (about 5min in your observations). As long as the interval considered includes the time spent in suspension, the calculated throughput value will be very low. Once the interval again consists only of awake time, the throughput will be back to what is expected.
For example, suppose that you suspended for 5 minutes. Then just after resuming, pv calculates that 500kB were transferred in the last 5min, meaning a throughput of only about 1.7kB/s. That's way below the 500kB threshold so pv transfers more data for a while to compensate. Eventually the throughput calculation will stabilize again.
Suspending the system is not like suspending the pv process. Suspending the system is transparent to programs; a suspended process, by contrast, receives a SIGCONT signal when it wakes up. pv has a signal handler for SIGCONT which causes it to more or less subtract the time it spent suspended (I haven't checked what exactly it does if it was suspended with an uncatchable SIGSTOP signal, but it shouldn't cause too big a perturbation, unlike system suspension).
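The averaging effect can be mimicked with a toy model (my sketch; this is not pv's actual algorithm). The limiter below computes how long to sleep from total bytes over total elapsed time, so a large clock jump makes the required delay collapse to zero and throttling stops until the long-run average recovers:

```python
import time

class RateLimiter:
    """Toy average-based throttle: sleep so that sent/elapsed == limit."""
    def __init__(self, limit_bps):
        self.limit = limit_bps
        self.start = time.monotonic()
        self.sent = 0

    def throttle(self, nbytes, now=None):
        """Return seconds to sleep before sending the next chunk."""
        now = time.monotonic() if now is None else now
        self.sent += nbytes
        expected = self.sent / self.limit  # time this much data should take
        return max(0.0, expected - (now - self.start))

rl = RateLimiter(512 * 1024)
# Normal operation: one 512 KiB chunk sent instantly => sleep ~1 s.
print(rl.throttle(512 * 1024, now=rl.start))
# Simulate resuming after a 300 s suspend: the delay drops to 0, so
# data flows unthrottled until the average catches up with the limit.
print(rl.throttle(512 * 1024, now=rl.start + 300))
```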
| Why does it temporarily look like the pv command transfer limit is no longer enforced when I come out of suspend to ram? |
1,425,636,689,000 |
I've been building this (http://tryperl.com) to learn linux. So, my question is just about some *nix stuff I don't fully grok.
One of the things I want to do is run the generated script (in my application) as a limited user. So I create a limiteduser. Then I can set, say, fork limits like this: limiteduser hard nproc 300.
Is this then the right way of running a script as limiteduser:
sudo -u limiteduser perl myscript.pl
Secondly, what if I had concurrent requests that forked and ran the above command simultaneously, with different scripts? Would that cause problems? I heard it would if two processes ran as the same user, or something?
Specifically I'm running on ubuntu but I assume this applies to any *nix distro?
Update: Also, I plan to use Time::Out to timeout a script if it takes too long so ideally I should have a limited number of processes running.
|
First, sudo is a good way to run a script as a limited user.
Second, whether there will be problems running more than one instance of the script at the same time will depend on what the script actually does. It has (usually) nothing to do with whether the various instances are run by the same user; instead, you need to consider whether the script e.g. tries to edit a file, in which case two instances might try to edit the same file at the same time, which may leave the file in an unexpected state.
Also, of course if you are limiting how many processes the user is allowed to run, or how much memory it's allowed to consume, then at some point you may run into the limit and the script won't be able to do what you want. But that's a feature of the whole limiting system. (If you don't have limits, then a badly written program may cause your entire system to grind to a halt. But again, that is because of how the program is written, not because of who runs it (except that if you run the program as root then it's a lot easier to break things, which is why it's good that you have the limited user running things).)
| understanding how to run script securely as limited user |
1,425,636,689,000 |
I want to move veryverylongfilename.txt to a filesystem which has a short NAME_MAX.
mv veryverylongfilename.txt /mnt/tiny gives me an ENAMETOOLONG-type error:
mv: cannot stat '/mnt/tiny/veryverylongfilename.txt': File name too long
What command should I use instead, to truncate the filename if necessary?
It would be great if the command could keep the extension. Also, it would be nice to avoid overwriting existing files, for instance when moving veryverylongfilename1.txt then veryverylongfilename2.txt, by using any kind of unique identifier in place of the last few characters before the extension.
|
The following function (tested in bash) will attempt to move its first parameter to its second parameter. It expects (and tests for) the first parameter to be a file and its second to be a directory.
The local "namemax" variable should be adjusted to your filesystem's NAME_MAX.
moveshort() {
local namemax=8
# simple sanity checks
[ "$#" -eq 2 ] || return 1
local src=$1
[ -e "$src" ] || return 2
local dest=$2
[ -d "$dest" ] || return 3
local extension=${src##*.}
local basename=${src%.*}
# the base name has ($namemax - $extension - 1)
# characters available to it (1 for the period)
local maxbase=$((namemax - ${#extension} - 1))
# shorten the name, if necessary
basename=${basename:0:maxbase}
# echo "Shortened name: ${basename}.${extension}"
# find a new name, if necessary
if [ -e "${dest}/${basename}.${extension}" ]
then
local index=1
local lenindex=${#index}
#local newbase=${basename:0:-lenindex}
local newbase=${basename:0:maxbase - lenindex}
# loop as long as a conflicting filename exists and
# we're not out of space in the filename for the index
while [ -e "${dest}/${newbase}${index}.${extension}" -a "${#index}" -lt "$maxbase" ]
do
index=$((index + 1))
lenindex=${#index}
newbase=${newbase:0:maxbase - lenindex}
done
if [ -e "${dest}/${newbase}${index}.${extension}" ]
then
echo "Failed to find a non-colliding new name for $src in $dest" >&2
return 4
fi
basename=${newbase}${index}
# echo "new name = ${basename}.${extension}"
fi
# perform the move
mv -- "$src" "${dest}/${basename}.${extension}"
}
After the sanity-checks, the function saves off the extension and remaining base filename, then determines how many characters are available for the base filename to use.
If the given filename is already too long, then we chop off the extra characters.
If the shortened name already exists in the destination, then we begin looping, starting at 1, generating a new base filename until we run out of space in the base filename or we find a file that doesn't exist. The new base filename gets squeezed by the index as the index grows.
If we run out of space in the filename, the function echoes out an error and returns; otherwise, it attempts to execute the mv.
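The shortening itself is plain bash parameter expansion. As a standalone sketch (the file name and namemax value are just examples matching the function above):

```shell
namemax=8
src="veryverylongfilename.txt"
extension=${src##*.}                      # everything after the last dot -> txt
base=${src%.*}                            # everything before it
maxbase=$((namemax - ${#extension} - 1))  # room left for the base (1 for the dot)
short=${base:0:maxbase}
printf '%s.%s\n' "$short" "$extension"    # -> very.txt
```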
| Truncate if necessary when moving file to filesystem with shorter NAME_MAX |
1,425,636,689,000 |
I have Ubuntu installed as a VM on my laptop. My laptop has a quad-core CPU with HT technology, making it 8 logical cores.
Within Ubuntu VM I can only use max of 4 cores.
What should I do so I can access all 8?
Which Linux distribution would let me use all my cores? Which VM software should I use?
Any advice?
|
You are hitting a VMware Player limitation.
VMware Player takes advantage of the latest hardware to create virtual machines with up to 4 virtual processors, 2 TB virtual disks and up to 64 GB of memory per virtual machines.
VirtualBox has a much higher limit (32, as far as I can tell).
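Either way, you can verify how many logical CPUs the guest actually sees from inside it:

```shell
nproc                               # logical CPUs visible to this system
grep -c '^processor' /proc/cpuinfo  # similar count, read from procfs
```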
| How to avoid 4 core limit for Ubuntu within VMWare? |
1,425,636,689,000 |
So I am trying to change the hard limit for file descriptors on my Ubuntu 18.04 laptop. I have tried everything, but the changes have still not taken effect.
I need to run a Go program which keeps throwing this error: too many open files
So I made some changes to my /etc/security/limits.conf file according to this blog post https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a
These are the contents of limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
root soft nproc 65535
root hard nproc 65535
root soft nofile 65535
root hard nofile 65535
And I also changed /etc/pam.d/common-session to add the line session required pam_limits.so
Then I restarted my terminal and ulimit -Hn still showed 4096, but when I did sudo su and ran the same command it gave me 65535.
But since I am not running my Go program inside of the su session, it still doesn't work out for me; I need to change my actual hard limit to a higher value for all users, not just the superuser.
What am I doing wrong?
|
You're very close.
Add the user name that needs the values. You have just set limits for root; now set them for the user you need.
You can add users, groups, etc.
From Redhat site:
# vi /etc/security/limits.conf
#<domain> <type> <item> <value>
* - core <value>
* - data <value>
* - priority <value>
* - fsize <value>
* soft sigpending <value> eg:57344
* hard sigpending <value> eg:57444
* - memlock <value>
* - nofile <value> eg:1024
* - msgqueue <value> eg:819200
* - locks <value>
* soft core <value>
* hard nofile <value>
@<group> hard nproc <value>
<user> soft nproc <value>
%<group> hard nproc <value>
<user> hard nproc <value>
@<group> - maxlogins <value>
<user> hard cpu <value>
<user> soft cpu <value>
<user> hard locks <value>
<domain> can be:
a user name
a group name, with @group syntax
the wildcard *, for default entry
the wildcard %, can be also used with %group syntax, for maxlogin limit
Set up the users in limits.conf, log off and back on and see if that helps.
joe soft nproc 65535
joe hard nproc 65535
joe soft nofile 65535
joe hard nofile 65535
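After logging back in as that user, you can confirm the new values took effect in the fresh shell:

```shell
ulimit -Hn   # hard limit on open files for this session
ulimit -Sn   # soft limit
```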
| Not able to increase ulimit -Hn, only shows up for a sudo su session |
1,603,462,705,000 |
As we know, cgroups can limit cpu usage of processes. Here is an example:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
30142 root 20 0 104m 2520 1024 R 99.7 0.1 14:38.97 sh
I have a process, whose pid is 30142. I can limit it as below:
mkdir -p /sys/fs/cgroup/memory/foo
echo 1048576 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes
echo 30142 > /sys/fs/cgroup/memory/foo/tasks
As we can see, if I want to limit a process, I have to execute it first and only then can I limit it according to its pid. Is it possible to limit a process according to its name? Is it possible to limit a process before executing it?
|
Control groups are pid-based, and there is no direct way of limiting processes by name. (Since control groups are hierarchical, this makes sense: a group also contains its member processes’ future children, by default, and having them re-attach to another group based on their name would be surprising.)
The typical way to use control groups is to attach a parent process to them, and then rely on the fact that children inherit their parent’s group. However there is a tool which will allow you to start a process in a given group, cgexec:
cgexec -g memory:foo yourcommand
On Debian you’ll find this in cgroup-tools.
| cgroups: Is it possible to limit cpu usage by process name instead of by pid |
1,603,462,705,000 |
Is there a way to restrict an application's access to the system time in Linux?
I want to launch the application as abstracted from the environment as possible. You can restrict access to devices/file systems with permissions, but it's not clear how to restrict access to the system clock, because this is not a standard system device.
|
If you want to deny access to the system time, you’d pretty much have to write a system call filter; nowadays that would be a seccomp filter. See Kees Cook’s simple tutorial, or for more complex requirements, libseccomp. You’d need to deny access to gettimeofday, clock_gettime, and time, at least; the details depend on whether you’re trying to deal with an adversarial application (where you might also want to deny access to exec, system etc. to prevent an application from running external applications — although seccomp filters are inherited so that might not matter —, and deny access to external, direct or indirect time sources such as the network or even file systems).
If you want to give the application an artificial time, look at faketime and libfaketime.
| Is there a way to restrict the application access to the system time in linux? |
1,603,462,705,000 |
I have a list of files stored in txt file.
./test7
./test4
./test1
./test5
./test6
./test10
./test8
./test2
./test9
./test3
I want to run a command on all those files but I want to sleep 1 second after each two files are processed, eg:
cp ./test1 test1-backup
cp ./test2 test2-backup
sleep 1
cp ./test3 test3-backup
cp ./test4 test4-backup
sleep 1
...
cp ./test9 test9-backup
cp ./test10 test10-backup
Is there a way to achieve this by bash script? I would like to parametrize the amount of commands executed by 1 iteration (calling sleep 1).
Also another problem is that real list of files has hundreds thousands of lines.
|
I assume you're not describing your real-world scenario: not only do you want to copy hundreds of thousands of files, but you want to sleep for hundreds of thousands of seconds... wtf?!?
Anyway:
while IFS= read -r file1
IFS= read -r file2
do
cp "$file1" "${file1##*/}-backup"
cp "$file2" "${file2##*/}-backup"
sleep 1
done < inputFile
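Note that the pairwise loop above silently skips the last file when the list has an odd number of lines (the second read fails and ends the loop before its body runs). A variant with a parametrizable batch size that also handles an odd trailing entry (the first lines only create demo files so the sketch is self-contained):

```shell
# demo setup: three files and a list with an odd number of entries
printf 'x\n' > test1; printf 'x\n' > test2; printf 'x\n' > test3
printf './test1\n./test2\n./test3\n' > inputFile

batch=2   # number of copies per sleep
count=0
while IFS= read -r file; do
    cp "$file" "${file##*/}-backup"
    count=$((count + 1))
    if [ $((count % batch)) -eq 0 ]; then
        sleep 1
    fi
done < inputFile

ls test1-backup test2-backup test3-backup   # all three were copied
```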
| bash execute command for X arguments at a time |
1,603,462,705,000 |
I am trying to determine the total number of concurrent sessions on a server to determine a good number for the maxlogins option in /etc/security/limits.conf. How is the number of sessions calculated in relation to the value set for maxlogins and is there a way to check that number via the command line?
|
limits.conf is used by pam_limits, so I suspect the answer to "How is the number of sessions calculated?" is "the number of times the pam_limits module was used as part of a PAM authentication process".
So that means:
The login command
su
sudo
Etc.
Look in /etc/pam.d for an authoritative list of things that include pam_limits in their session setup.
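For a rough command-line count, utmp-based tools give an approximation (pam_limits actually counts PAM sessions, so the numbers can differ):

```shell
# current login sessions per user, busiest first
sessions=$(who | awk '{ print $1 }' | sort | uniq -c | sort -rn)
printf '%s\n' "$sessions"
```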
| How to determine "number of concurrent sessions" for all users on a server |
1,603,462,705,000 |
root@Andromeda:/# ulimit -n -S
2048
root@Andromeda:/# ulimit -n -H
2048
root@Andromeda:/# ulimit -n -S 4096
2048
root@Andromeda:/# echo $?
0
Failure to set soft limit above the hard one makes sense.
What perhaps does not make sense is why the exit code of this attempt is 0.
|
It seems that putting -H or -S after -n causes ulimit to report, not set, and therefore there is no error. The number at the end seems to be ignored. As far as I can tell this should be a usage error, but it is not a limits error.
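This behavior is shell-specific. bash, for instance, does treat raising the soft limit above the hard limit as an error with a non-zero exit status:

```shell
# lower both limits, then try to raise soft above hard; the inner shell
# prints exit=1 (bash's ulimit error message goes to stderr)
bash -c 'ulimit -S -n 1024; ulimit -H -n 1024; ulimit -S -n 4096; echo "exit=$?"'
```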
| Setting soft limit above hard does not fail in terms of exit code? |
1,603,462,705,000 |
I got
ERR max number of clients reached
from my redis server, so I decided to increase the allowed max client connections in its configuration. According to the documentation, this also requires increasing the corresponding open-file limits for the user.
So I made the following changes:
$ grep maxclient /etc/redis/redis.conf
maxclients 100000
$ grep redis /etc/security/limits.conf
redis - nofile 100000
Then I did systemctl restart redis-server
However, when I check the limits for the redis-server process which is run by the system user redis, the max allowed files report something else:
$ ps -u redis
PID TTY TIME CMD
21168 ? 00:00:22 redis-server
$ grep 'open files' /proc/21168/limits
Max open files 4096 4096 files
Do I need to reboot the machine for the changes to take effect? Or is it something else?
|
/etc/security/limits.conf is the configuration file for the pam_limits PAM module. It only affects users logging in with PAM, not services started in other ways.
You'll need to configure systemd to change the limits on the processes it starts, see e.g. How to set ulimits on service with systemd? on how to do that.
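As a sketch (assuming the unit is named redis-server.service; check with systemctl status redis-server), a systemd drop-in override would look like:

```
# /etc/systemd/system/redis-server.service.d/limits.conf
[Service]
LimitNOFILE=100000
```

Then run systemctl daemon-reload, restart the service, and re-check /proc/<pid>/limits.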
| Open file limits not increased for redis user despite change in /etc/security/limits.conf |
1,603,462,705,000 |
I am running a program written in python that makes heavy computations using theano.
As it is a very CPU-intensive program, it is disrupting all my other activities on my laptop.
For this reason I have been setting the nice level of the process to 19 and have used cpulimit to reduce its CPU usage to 10%.
Unfortunately these attempts were not effective, as the laptop sometimes gets stuck even for minutes.
Do you have any idea on how to tackle with this problem? How can I instruct the scheduler to behave properly?
The laptop is a Samsung Ultrabook (New Series 9) with an Intel Core i5-3317U.
The operating system is Linux, Ubuntu 15.10 with kernel 4.2.0.
EDIT: The problem seems to be caused by thrashing (low memory, constantly swapping)
|
This kind of non-responsiveness, even though CPU usage is limited, is often caused by swapping (i.e. your process pushes other tasks out to disk, and getting them back in takes a lot of time).
The best way to limit your memory usage is normally from within the program. If that is not possible and memory is consumed slowly (because it is not released), it might be necessary to kill the program every so often and restart it. Of course this only works if intermediate results are written on a regular basis.
From outside the program you can limit the amount of memory using the timeout script (this is not the timeout from coreutils!). It has a -m option to limit memory and will kill your process if it starts to consume too much memory.
If you cannot restart processing, then your options are
buying more memory for your machine if it can be installed
installing a SATA SSD if your laptop supports that and put swap on that
rewriting the software to work in smaller chunks
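If you cannot change the program immediately, a crude stopgap from the launching shell is to cap its memory so large allocations fail fast instead of swapping everything else out (the 2 GiB value and the script name below are only examples):

```shell
ulimit -v 2097152              # cap on virtual memory, in KiB (~2 GiB)
ulimit -v                      # confirm the new limit
# python heavy_theano_job.py   # placeholder for the actual program
```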
| `cpulimit` and `nice` not effective in limiting the cpu usage of a python program that contains heavvy computations using theano |
1,603,462,705,000 |
This seems like a straightforward question:
Is there a relationship between the groups defined in LDAP LDIF files and /etc/security/limits.conf?
I.e., does defining an LDAP user with ou=student translate to @student effects from /etc/security/limits.conf?
If not, (how) can it be made to?
|
The groups defined with the @group syntax in the limits.conf file can match groups defined in any group database back-end, i.e. files (/etc/group), nis, ldap, and whatever else nsswitch.conf might support.
Assigning a group to an ldap user entry is not done by locating his/her entry somewhere in the hierarchy (like under ou=student in your question) but by defining group entries and populating them, i.e. adding them as secondary groups to users. That's not really different from what you would do with the local files (/etc/passwd & /etc/group), it's just where the information is stored and how it is done that changes.
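You can check whether a group is visible through NSS (whichever back-end provides it, files or LDAP) and which groups a user belongs to with getent and id; the group and user names below are placeholders:

```shell
# does any NSS back-end resolve the group?
getent group student || echo "group 'student' not found by NSS"
# which groups (primary + secondary) does a user belong to?
id -nG alice 2>/dev/null || echo "user 'alice' not found"
```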
| Relationship between limits.conf and LDAP? |
1,603,462,705,000 |
(I posted this question at stackoverflow, but maybe here is the more appropriate place, if necessary - I'll delete the other question.)
I need to put a limit on block IO operations speed for a number of docker containers.
To achieve this, I need to do something like:
docker run -it --device-read-bps /dev/sda:1mb ubuntu, according to the docs.
My question is: how do I get the correct device per container to set the limit for? Is there any way to get this info with docker inspect?
docker inspect my_container | grep DeviceName returns nothing.
The output of df -h is:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 1.4M 7.9G 1% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/mapper/vg0-data 31G 6.9G 23G 24% /
tmpfs 7.9G 0 7.9G 0% /tmp
/dev/sda3 283M 27M 238M 11% /boot
/dev/sda2 10M 2.2M 7.9M 22% /boot/efi
/dev/sdb1 63G 198M 60G 1% /mnt/data
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/0c917cf591efb40f75a450b6ad93bf9cf06c91f7f625e1f073887031d152f444/merged
shm 64M 0 64M 0% /var/lib/docker/containers/b21b4b5d27bd57f04204d3a54f11930a532bdc8c56cabfe903f34b955f3c81f1/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/8b198c4829d4eb13c21e7c9d1be99aa00986d64f13f50a454abc539aed37e759/merged
shm 64M 0 64M 0% /var/lib/docker/containers/1667192b0b8026eb517894fdf72f71c6aca5a0ff78648447c12175c96b76990c/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/72efa8bcdec2c529ca3ebde224f8d14e22780636c614fe45a0228eb957a99351/merged
shm 64M 0 64M 0% /var/lib/docker/containers/a7123dfebcc42a675b6ccb0435df1cc24bcd0a39847fb4cb5a3fdcaf2d38089f/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/65ede56c537f5de766f616f13df90aae46f287b9e28703496f90af9f74f4c463/merged
shm 64M 0 64M 0% /var/lib/docker/containers/6a24ef7116078d48fde87fc47a8efd194ef541ffb7d85ae4bec34e5086e46d4b/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/2b444d99740719500285bd94eb389815e631cd183cf3637e64fa40999ccf2530/merged
shm 64M 0 64M 0% /var/lib/docker/containers/8c7300dcd9981878ce46f4b805d65b72bf3afbf87d9510312ba5110b95ae8cf4/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/9ea1ad005bcbcdb0e52d6f2b05568268e3a6e318f8d30986e0fac56523408e89/merged
shm 64M 0 64M 0% /var/lib/docker/containers/5eb12be6805976d230f5ec17bda65100745bebeccea4ab7c2bcf2260405ecb96/mounts/shm
I came across many threads asking this question, like this one, but no definitive answer was given.
|
Docker does not provide a way to directly get the block device that is used for read/write operations.
Successful workaround:
get all the block devices of the OS (OS dependent command).
put the limit on all the devices. No side effects observed.
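A sketch of that workaround: enumerate the host's disks from sysfs and build one --device-read-bps flag per device, skipping loop/ram pseudo-devices (the 1mb rate is just an example):

```shell
flags=""
for dev in /sys/block/*; do
    name=${dev##*/}
    case $name in
        loop*|ram*|zram*) continue ;;   # skip pseudo block devices
    esac
    flags="$flags --device-read-bps /dev/$name:1mb"
done
cmd="docker run$flags -it ubuntu"
echo "$cmd"   # inspect the command before running it
```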
| How to get the correct device to limit block IO for docker container? |
1,603,462,705,000 |
I have this in my /etc/security/limits.conf:
#<domain> <type> <item> <value>
root - memlock 65536
root - stack 524288
root - nice -20
root - nofile 16384
yet, the process /usr/lib/xorg/Xorg, which does run as root still has only 1024 for RLIMIT_NOFILE:
cat /proc/$(pgrep Xorg)/limits | grep 'open file'
Max open files 1024 4096 files
Why are my settings in /etc/security/limits.conf not reflected in Xorg ?
Where can I increase limits for /usr/lib/xorg/Xorg ?
My system is Debian Buster without systemd (I am using sysvinit). And I am using slim as login manager. So I guess it is slim, that launches Xserver. Below is the pam module used by slim:
cat /etc/pam.d/slim
auth requisite pam_nologin.so
auth required pam_env.so readenv=1
auth required pam_env.so readenv=1 envfile=/etc/default/locale
@include common-auth
@include common-account
session required pam_limits.so
session required pam_loginuid.so
@include common-session
@include common-password
UPDATE
Inspired by suggestion from @ajgringo619, I have added ulimit -n 16384 to /etc/init/slim.conf, the configuration script used by slim, since it is slim that actually starts xserver:
At first, it looked as if that solved the problem: after adding ulimit -n 16384 and restarting slim, the new ulimit takes effect. But the problem is, after a reboot, it does not. I mean, I have to manually restart slim for the new ulimits to take effect; if slim starts as a normal init script, it still has the old ulimit value of 1024.
|
(Updated to reflect your test)
Since the user that's running the Xorg process is not actually logging in, your settings in /etc/security/limits.conf are being ignored. With sysvinit, you need to add ulimit -n 16384 to the beginning of /etc/init.d/slim (tested on antiX 19), not in slim.conf.
The systemd version of this fix also works; tested on my Debian 10 VM (adding a configuration setting that mimicked the aforementioned ulimit command).
| "Xorg" process does not take limits from /etc/security/limits.conf |
1,489,764,327,000 |
How to setup/configure the Arch Linux bootcd (live-CD, ISO) so I can login to it using an SSH client?
And which password is by default set for the (automatic login) root account?
|
The default root password for the ISO distribution is blank. And by default you are not allowed to login with SSH using a blank password.
Therefore two commands are necessary:
passwd --
To set a non blank password for the currently logged in user ('root' for liveCD). Enter the password twice.
Before september 2021: systemctl start sshd.service --
To start the ssh daemon.
September 2021 and later: sshd is started by default.
Now you can login from your client machine using:
ssh root@ip-address or
ssh -o PreferredAuthentications=keyboard-interactive root@ip-address in case you have a keypair
PS Don't know the IP address? The live-CD includes commands ifconfig and ip address.
| How to setup SSH access to Arch Linux Iso (livecd) booted computer? |
1,489,764,327,000 |
I have an USB thumb stick with the following partition scheme:
sdb
sdb1 -> root partition '/'
sdb2 -> /home partition
If I boot from a live CD, or just from my current Ubuntu OS located on my computer's hardware, I am able to chroot this way:
mount /dev/sdb1 /mnt
mount /dev/sdb2 /mnt/home
chroot /mnt /bin/bash
In this instance, my prompt is something like [root@archiso]. The thing is I would like to log in my user account which is in /mnt/home/me, but I don't know how to do it?
A further goal would be to launch X (and any desktop manager) from this user session.
So the questions that I need to be addressed are the following:
1) How do I log into my user session once I am chrooted?
2) Once I am logged in, would it be possible to start X (startx), even if I am initially logged in to my computer's Ubuntu OS, with a gnome session already running?
|
Do the chroot, as described in the question,
and then do su - fred (or whatever your name is) or exec su - fred.
Do chroot /mnt /bin/su - fred,
so that the su will be the first thing that runs in the chroot environment.
Note that both of the above assume
that your fred user is defined in /mnt/etc/passwd.
OR
Do chroot --userspec=fred:bedrock --groups=group1,group2 /mnt /bin/bash,
to set your user identity from the inception of the chroot.
The chroot invocation manual says,
The user and group name look-up performed by the --userspec
and --groups options, is done both outside and inside the chroot,
with successful look-ups inside the chroot taking precedence.
Consider adding -l (or --login) to tell bash to act
as if it had been invoked as a login shell;
i.e., to tell it to read /etc/profile and ~/.bash_profile, etc.
The question asks,
“How do I log into my user session once I am chrooted?”
Quite possibly the literal answer, chroot /mnt /bin/login, would work.
If this works, it would be the most like an ordinary login,
in that it would ask for a user name and a password.
It is (probably) the only answer (of these four, anyway)
that would cause the who command in the chroot environment
to report your user name (and time of login, etc.) and might be the best bet
to use some of the more esoteric features of the environment.
(I’m thinking [speculating] about cron /crontab / at,
the passwd command, file system quotas, the accounting system, etc.
I don’t know whether or why these, or any other software,
might not be satisfied with just looking at your real UID.)
Of course this, like the first two answers,
would operate on users and passwords defined in the chrooted environment.
This might not work.
I vaguely seem to recall something about some versions of login
detecting that it is being run in a non-standard way (like this)
and refusing to play along. But I can’t recall specifics
and I can’t find anything in two minutes’ worth of searching.
| Login to user's session with chroot |
1,489,764,327,000 |
My question is about Linux in general but lets suppose my ubuntu isn't working property, booting in tty or whatever. I have no internet connection but I have ubuntu live cd. Is it possible to reinstall the desktop environment from live cd?
|
Yes it is. Either by using the CD as a repository, or by booting into the live session and downloading the packages manually and then installing from your normal OS, or even by setting up a chroot environment. In the examples below, I am using apt-get install xfce4 as the command you will want to run, but dpkg-reconfigure or whatever else would work as well.
1. Use the CD as a repository.
Say that you've screwed up your desktop and are booting to a command line with no internet access (which shouldn't happen, you can have internet even without a GUI). OK, you can put your CD in your drive and then run
sudo apt-cdrom add
If all goes well, that should detect your CD, mount it and parse it for packages. Once that's done, run sudo apt-get update to refresh your sources and install your desktop normally. For example: apt-get install xfce4-desktop.
NOTE: I have not tested this but it is relatively well documented. See, for example, here.
2. Boot into the live session and get the packages you want.
This one requires that you actually have a working internet connection in the live CD environment. First, boot into your normal (broken) OS and install apt-offline. If your system is already broken, you can download the package here (make sure you also get the dependencies) and install with
sudo dpkg -i apt-offline_1.3.1_all.deb
Once you have it installed run
sudo apt-offline set xfce-offline.sig --install-packages xfce4
Then, take the file that was just generated (xfce-offline.sig), boot into the live session and run
sudo apt-offline get xfce-offline.sig --no-checksum --bundle xfce-offline.zip
Now, boot back into your local system to install it:
unzip xfce-offline.zip
That should result in a list of .deb files that you can then install manually.
I also found something called keryx which might be worth checking out:
Keryx is a free, open source application for updating Linux. The Keryx Project started as a way for users with dialup, or low-bandwidth internet to be able to download and update packages on their debian based distribution of linux. Mainly built for Ubuntu, Keryx allows users to select packages to install, check for updates, and download these packages onto a USB portable storage device. The packages are saved onto the device and are then taken back to the Linux box that it originated from and are then installed.
Finally, you can also do all this manually with apt-get from the live session:
sudo apt-get install --print-uris -y xfce4 | sed "s/'//g" | cut -d ' ' -f 1,2 |
while read url target; do wget "$url" -O "./$target"; done
The command above will download all .deb files needed to install xfce. See my answer here for more details on how that works.
References
https://help.ubuntu.com/community/InstallingSoftware#Installing_packages_without_an_Internet_connection
http://ubuntuforums.org/showthread.php?t=1637309&p=10198406#post10198406
3. Use the live CD to set up a chroot environment.
Setting up the chroot is explained in more detail here but the basic procedure is (replace /dev/sda1 with whichever partition has your /):
sudo mkdir /mnt/foo
sudo mount /dev/sda1 /mnt/foo
sudo mount --bind /dev /mnt/foo/dev &&
sudo mount --bind /dev/pts /mnt/foo/dev/pts &&
sudo mount --bind /proc /mnt/foo/proc &&
sudo mount --bind /sys /mnt/foo/sys
sudo chroot /mnt/foo
You have now tricked your system into thinking it is booted into your installed OS and you can use apt-get normally. Once you've finished, exit the chroot with exit and reboot.
| Is it possible to install a linux desktop environment from a live cd? |
1,489,764,327,000 |
I've got a laptop which I haven't used since last summer vacation: I put Debian 7 on it and used Debian's feature to fully encrypt the disk, apart from a tiny bootloader (or a tiny partition) I guess (I'm not sure exactly which encryption this is, nor how to find out).
I do know the password of the encrypted filesystem so the system boots, but I'm stuck at the login prompt: I forgot my password(s).
Given that I know the password of the encrypted filesystem, I take it I can boot from a Live CD (or maybe even from the Debian install CD?) and somehow "mount" the encrypted partition.
If that's the case, can someone explain me how to do this? (knowing that I've never mounted an encrypted partition / filesystem manually)
|
Full disk encryption is usually done using the dm-crypt Device Mapper target, with a nested LVM (Logical Volume Manager) inside. So to reset your password you'll have to
Unlock/open the crypto container; this is done using cryptsetup
Activate the logical volumes; vgchange is used for this.
Usually you won't need to care about this. Just let the initrd provided by your distribution do the job but tell it not to start /sbin/init but something else — a shell would be good. Simply append init=/bin/sh to your kernel's command line in your boot loader (with GRUB you could press E with the appropriate boot entry selected to edit the entry).
Then your kernel should boot up normally, booting into the initrd which should ask for your passphrase and set up your file-systems but instead of booting the system up drop you into a shell. There you'll have to
remount / read-write: mount -o rw,remount /
reset your password using passwd <user> (since you're root you won't get prompted for the old one)
remount / read-only: mount -o ro,remount / (skipping this might confuse your init scripts)
Start the regular init with exec /sbin/init (or simply reboot -f).
If this does not work, you'll have to take the approach with greater effort and do it from "outside", a.k.a. booting a Live CD. Usually this should be possible by using the Debian install CD — the tools should be installed, since the installer somehow has to set up encryption which uses the same schema:
Boot a Live CD
Open the encrypted partition by issuing
# cryptsetup luksOpen /dev/<partition> some_name
where <partition> should be your encrypted partitions name (sda2, probably). some_name is just… some name. This will prompt you for the disk's encryption passphrase and create a block device called /dev/mapper/some_name.
Activate the logical volumes. This should usually work by issuing
# vgscan
# vgchange -ay
This will create block device files for every logical volume found in the LVM in /dev/mapper/.
Mount the volume containing your / file system:
# mount /dev/mapper/<vgname>-<lvname> /mnt
where <vgname> and <lvname> are the names of the volume group and the logical volume. This depends on the way distributions set it up, but just have a look into /dev/mapper/, normally names are self-explanatory.
Change your password with passwd <user> accordingly.
| How to reset password on an encrypted fs? |
1,489,764,327,000 |
I have a requirement to boot RHEL 6.6/7.0 into read-only mode with a writable layer only in RAM. I believe this is similar to how live CDs work, in that the file system is read-only, but certain parts of it are writable after being loaded into RAM. Here, any changes written to the file system are lost on reboot (since only RAM is updated in the writable layer).
While looking around the net, I haven't found a guide on configuring my own "live CD" without helper tools so that I can mimic this process in an existing installed system.
Does anyone know where I might be able to get some resources on either building my own live CD or making a read-only Linux with a writable layer only in RAM?
|
OK, so I do have a working read-only system on an SD card that allows the read/write switch to be set to read-only mode. I'm going to answer my own question, since I have a feeling I'll be looking here again for the steps, and hopefully this will help someone else out.
While setting various directories in /etc/fstab as read-only on a Red Hat Enterprise Linux 6.6 system, I found the file /etc/sysconfig/readonly-root. This piqued my interest in what this file was used for, as well as any ancillary information regarding it.
In short, this file contains a line that states "READONLY=no". Changing this line automatically loads most of the root file system as read-only while preserving necessary write operations on various directories (directories and files are loaded as tmpfs).
The only changes I had to make were to set /home, /root, and a few other directories as writable through the /etc/rwtab.d directory and to modify /etc/fstab to load the root file system as read-only (changed "defaults" to "ro" for root). Once I set "READONLY=yes" in /etc/sysconfig/readonly-root, set my necessary writable directories through /etc/rwtab.d, and made the fstab change, I was able to get the system to load read-only, but with writable directories loaded into RAM.
For more information, these are the resources that I used:
http://www.redhat.com/archives/rhl-devel-list/2006-April/msg01045.html (specifies how to create files in the /etc/rwtab.d/ directory to load files and directories as writable)
http://fedoraproject.org/wiki/StatelessLinux (more information on readonly-root file and stateless Linux)
http://warewolf.github.io/blog/2013/10/12/setting-up-a-read-only-rootfs-fedora-box/
And, of course, browsing through /etc/rc.d/rc.sysinit shows how files and folders are mounted read-only. The readonly-root file is parsed within the rc.sysinit, for those who are looking for how readonly-root is used in the init process.
Also, I did a quick verification on Red Hat Enterprise Linux 7.0, and this file is still there and works. My test environment was CentOS 6.6 and 7.0 in a virtual machine as well as RHEL 6.6 and 7.0 on a VME single-board computer.
NOTE: Once the root is read-only, no changes can be made to the root system. For example, you cannot use yum to install packages and have them persist upon reboot. Therefore, to break the read-only root, I added a grub line that removes rhgb and quiet (this is only for debugging boot issues; you can leave them if you want) and added "init=/bin/bash". This allowed me to enter a shell. Once at the shell, I typed "mount -o remount,rw /" to make the system writable. Once writable, I modified (using vim) /etc/sysconfig/readonly-root to say "READONLY=no" and rebooted the system. This allows me to perform maintenance on the system by turning off read-only. If you are using an SD card like I am, then the read/write switch on the SD card needs to be set to writable.
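For reference, files under /etc/rwtab.d use one entry per line with, roughly, these keywords: empty (an empty writable path), files (a writable copy including contents), and dirs (a writable copy of the directory structure). A hypothetical example:

```
# /etc/rwtab.d/local  (example entries; paths are illustrative)
dirs    /var/lib/myapp
files   /etc/resolv.conf
empty   /tmp/scratch
```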
| Building a Read-Only Linux System With a Writable Layer in RAM |
1,489,764,327,000 |
My students often use the classroom projector screen to give presentations, usually using PowerPoint, but sometimes they show pre-recorded presentations in SMPlayer.
The classroom computer is sufficiently powerful, but as it uses Windows, we run into many problems which waste class time. Windows simply has too many problems recognizing USB drives, handling viruses, displaying multimedia inside PowerPoint slides, and opening PowerPoint files created with different versions of PowerPoint. I never have these issues on my Linux desktop computer, so I think that if I ran a live Linux distribution from CD or DVD, the students could give their presentations without a glitch.
Is there any live distribution specifically designed or well-suited for presentations? This would, minimally, need to include:
Adobe Acrobat (for PDFs)
LibreOffice or similar software (for PPTs, PPTXs)
Music/video playback software (for as many formats as possible)
Web browser, with Flash plugin
|
PCLinuxOS is another solid choice for use as a presentation distro. It comes with the following applications:
VLC
LibreOffice
Firefox
Flash
PDF Reader
The list of what it can do goes on and on. The download is 1.6GB, and the desktop environment is KDE.
screenshots
Here are some screenshots of it in action, as I put it through its paces.
main menu
LibreOffice's Impress for presentations
Firefox with Flash plugin
Firefox playing back youtube video
Firefox with built-in PDF Reader & standalone PDF Reader
Opening sample PowerPoint File (.ppt)
Mounting a USB Flash Drive
| What live distribution is well-suited for presentations? [closed] |
1,489,764,327,000 |
I'm using Ubuntu as my primary OS, with Windows 7 as the alternative for gaming and other stuff. I want to have a menu to boot some live CD ISOs. Is there any way to make a menu entry in Grub2/Burg to boot an ISO file the same way as a CD?
I see there are some ways to make it possible, but almost every method needs specific boot arguments (kernel parameters). And I have a mixed set of live OSes I want to boot using the included boot loader: Linux, Unix, DOS (for recovery purposes)...
I'm looking for a more generic way that makes it easy to discover ISOs and add them to the menu config file.
|
I have got a perfect chain loader with SysLinux, Grub4Dos and Grub2, and here are my configs:
Syslinux
LABEL DSL
KERNEL memdisk
INITRD /iso/dsl.iso
APPEND iso raw
LABEL GRUB4DOS
KERNEL /boot/grub.exe
Grub4Dos
title Paragon Partition Manager
map (hd0,0)/iso/paragon-bootable-media.iso (hd32)
map --hook
chainloader (hd32)
boot
title Syslinux
chainloader /boot/syslinux/syslinux.bin
title GRUB2 Chainload
root (hd0,0)
kernel /boot/grub/core.img
boot
Grub2
menuentry "Ubuntu 13.10 Desktop ISO" {
loopback loop /iso/ubuntu-desktop-amd64-13.10.iso
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/iso/ubuntu-desktop-amd64-13.10.iso noeject noprompt splash --
initrd (loop)/casper/initrd.lz
}
menuentry "Tinycore ISO" {
loopback loop /iso/tinycore.iso
linux (loop)/boot/bzImage --
initrd (loop)/boot/tinycore.gz
}
menuentry "GRUB4DOS" {
linux16 /boot/grub.exe
}
menuentry "SYSLINUX" {
chainloader=/boot/syslinux/syslinux.bin
}
| How to boot from iso with Grub2/Burg boot loader |
1,489,764,327,000 |
How can I transfer an .iso file to a USB drive in parallel with the download of this file, so that the downloaded data goes directly onto my USB drive without passing through my hard drive?
|
I wouldn't try it on a CD (although it might well be that my old buffering fears are outdated), but it works fine on a USB key; for example:
curl -L http://cdimage.debian.org/debian-cd/8.6.0/amd64/iso-cd/debian-8.6.0-amd64-netinst.iso | sudo dd of=/dev/sdf
downloads the current Debian network installer and writes it to the sdf key.
This works because dd reads by default from its standard input, if no if parameter is given. The -L parameter to curl tells it to follow redirections.
In fact there's no need to use dd; as root,
curl -L http://cdimage.debian.org/debian-cd/8.6.0/amd64/iso-cd/debian-8.6.0-amd64-netinst.iso > /dev/sdf
works fine too.
(Make sure you get the device right! You can easily destroy the wrong drive with this kind of command...)
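The key point above is that dd reads from its standard input. A sketch of the same pipe pattern that can be exercised safely — with a temporary file standing in for the download and a regular file standing in for the USB device (both paths are placeholders created here, not real devices):

```shell
# Exercise the "stream | dd of=..." pattern without touching a real device.
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"          # 1 MiB of test data
# dd reads stdin when no if= is given, just as in the curl | dd example.
cat "$src" | dd of="$dst" bs=4M conv=fsync 2>/dev/null
result=$(cmp -s "$src" "$dst" && echo identical || echo different)
echo "$result"                                 # -> identical
rm -f "$src" "$dst"
```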
| How to download an ISO and directly create a bootable USB |
1,489,764,327,000 |
For some testing I started Ubuntu Live from USB.
I'm trying to use the tail command to show a debug log, but it doesn't work.
I also test opening two terminals (t1, t2) with this code:
t1:
touch a
t2:
tail -f a
t1:
for i in `seq 1 10`; do echo $i >> a; sleep 1; done
Nothing in t2! What can be the cause?
|
If it's a case of tail not working at all, then it could be because your liveCD is using the overlayfs filesystem, which has a bug regarding notifications of modified files. You could try to move the log to another filesystem, such as /tmp if the application creating the log has an option to do so.
You could also carry out your test in /tmp instead of your homedir.
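If moving the log isn't an option, another workaround (my suggestion, not from the original answer) is to poll the file by byte offset instead of relying on inotify events, which is what the overlayfs bug breaks. A minimal sketch; in real use you would call poll_once inside a `while sleep 1; do ...; done` loop:

```shell
# A minimal polling "tail -f": re-read whatever was appended past the last
# seen byte offset, instead of waiting for inotify notifications.
logfile=$(mktemp); outfile=$(mktemp)
offset=0
poll_once() {                        # print bytes appended since the last call
    size=$(wc -c < "$logfile")
    if [ "$size" -gt "$offset" ]; then
        tail -c +"$((offset + 1))" "$logfile"
        offset=$size
    fi
}
echo "line1" >> "$logfile"
poll_once >> "$outfile"              # sees line1
echo "line2" >> "$logfile"
poll_once >> "$outfile"              # sees only the newly appended line2
cat "$outfile"                       # -> line1, then line2
rm -f "$logfile"
```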
| tail -f produces no output in Ubuntu live CD |
1,489,764,327,000 |
I have a USB device and I'm trying to set it up so that it has 2 partitions: one for a live Linux disc and the other for document storage.
I created the partitions using gparted and set a boot flag on the one I want to use as the live disc. Now I have a USB drive like this:
Disk /dev/sdc: 14.6 GiB, 15623782400 bytes, 30515200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc3072e18
Device Boot Start End Sectors Size Id Type
/dev/sdc1 8439808 30515199 22075392 10.5G 83 Linux
/dev/sdc2 * 51200 8439807 8388608 4G b W95 FAT32
I then used dd to flash an Ubuntu iso to /dev/sdc2
sudo dd if=/dev/shm/ubuntu-17.04-desktop-amd64.iso of=/dev/sdc2 bs=4M
When the disc is flashed onto the USB drive, I try to boot from my laptop and it shows "Operating system not found". When I try to use qemu/kvm, it shows a kernel panic like this:
How would I be able to do this properly?
|
You received the Operating system not found error because by writing the ISO to a disk partition rather than the disk as a whole, you inadvertently did not write a boot loader to the disk's MBR gap. And... apparently the PC doesn't care about the boot flag.
I see two possible solutions, but I must say, I'm really just pulling this out of my [censored].
Partition the disk after dd'ing the ISO
The best part of this solution is that you'll know whether it's feasible real quick.
dd the ISO to the entire USB disk
Check the USB disk for partitions using a partitioning tool. If you see partitions, you can probably add one for your encrypted volume.
Add a bootloader to chainload into the partition.
The idea here is to add a boot loader to the USB disk's MBR gap, and have it chainload whatever boot loader is in the partition. Chainloading basically delegates the boot loader's functionality to another bootloader. I'll direct you to Gentoo's documentation on the topic, considering it's quite thorough.
Other
If the above fail, you can try building your own Ubuntu ISO, adjusting how it boots.
| Use a partitioned live usb |
1,489,764,327,000 |
I wanted to see what files are added on top of ISO 9660 when LiveUSB Linux is running. When booted with persistence, the upper and work folders are clearly seen on the USB drive. I ran mount on Linux booted from the LiveUSB the "usual way" (without persistence) and saw that / is mounted via overlayfs with upperdir=/cow/upper. But sudo ls /cow gives "no such file or directory".
Where is /cow and how to see its contents?
Added 1:
I was able to extract contents of initrd from liveUSB via unmkinitramfs (see https://unix.stackexchange.com/a/495524/446998)
$ find . -type f -exec bash -c 'cat {} | grep "/cow/upper" && ls -l {}' \;
if [ ! -d /cow/upper ]; then
mkdir -p /cow/upper
/cow/lost+found|/cow/upper|/cow/log|/cow/crash|/cow/install-logs-*) continue ;;
mv "$cow_content" /cow/upper
mount -t overlay -o "upperdir=/cow/upper,lowerdir=$mounts,workdir=/cow/work" "/cow" "$rootmnt" || panic "overlay mount failed"
-rw-r--r-- 1 alex alex 33834 Jun 24 2020 ./main/scripts/casper
The next step I envision is to understand how /cow is created, because it is not seen in the contents of the initrd.
|
// Experience based on kubuntu 22 lts livecd
after chroot
the last step of the ramdisk (/cdrom/casper/initrd) is
run-init "${rootmnt}" "${init}" "$@"
which does something like
chroot "${rootmnt}" "${init}" "$@"
After that step, the original mount points may no longer be observable.
before chroot
Fortunately there are ways to pause at an interactive shell before chrooting (press ctrl+d or exit to continue)
kernel boot cmdline args
break=top,premount,mount,mountroot,bottom,init
may do the trick.
//BTW: manjaro 22 only supports break=premount (same as break=y)
or break=postmount, and does not support multiple values joined with ,
Another cmdline arg that might help:
debug or debug=y
which turns on detailed logging while the ramdisk is running
// boot args can be edited at the grub menu by pressing e
// this information comes from reading the ramdisk script
in ramdisk
you already unmkinitramfs and found
scripts/casper
which handles casper-rw persistence things
./scripts/casper
setup_overlay() {
image_directory="$1"
rootmnt="$2"
# Mount up the writable layer, if it is persistent then it may well
# tell us what format we should be using.
mkdir -p /cow
cowdevice="tmpfs"
cow_fstype="tmpfs"
cow_mountopt="rw,noatime,mode=755"
# Looking for "$(root_persistence_label)" device or file
if [ -n "${PERSISTENT}" ]; then
cowprobe=$(find_cow_device "$(root_persistence_label)")
if [ -b "${cowprobe}" ]; then
cowdevice=${cowprobe}
cow_fstype=$(get_fstype "${cowprobe}")
cow_mountopt="rw,noatime"
else
[ "$quiet" != "y" ] && log_warning_msg "Unable to find the persistent medium"
fi
fi
mount -t ${cow_fstype} -o ${cow_mountopt} ${cowdevice} /cow || panic "Can not mount $cowdevice on /cow"
The code here determines the persistence partition and mounts it.
keep /cow after chroot
In the ramdisk shell, after /cow and /root are ready:
mkdir /root/_cow
mount -o bind /cow /root/_cow
then after chrooting into the system, /_cow gives access to the original outer /cow
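On a running live system you can also locate the upper directory by parsing the overlay entry in /proc/mounts. A sketch; the sample line below is hard-coded for illustration (in practice you would take it from `grep overlay /proc/mounts`):

```shell
# Extract the upperdir= option from an overlay mount entry.
line='overlay / overlay rw,upperdir=/cow/upper,lowerdir=/filesystem.squashfs,workdir=/cow/work 0 0'
# Split the comma-separated mount options onto lines, then pick out upperdir=.
upper=$(printf '%s\n' "$line" | tr ',' '\n' | sed -n 's/^upperdir=//p')
echo "$upper"                        # -> /cow/upper
```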
| Where is /cow for Linux booted from LiveUSB? |
1,489,764,327,000 |
I'm not too familiar with Linux, but hope it may be ideal for this situation. I'm hoping there is a boot from CD style distro that will be ideal for my daughter to use solely for browsing the internet.
The only odd requirement is that it must be able to save a list of websites to 'favourites' (maybe on USB for example), or as a next best option maybe that list can be hardcoded (such as on a config file) within the distro?
Would this be possible? Can you recommend a distribution?
Thanks all. Many useful suggestions here. Sorry I could only accept one answer
|
Although it might be bloated, you could just use Ubuntu Live (netbook or desktop).
If you copy that to a USB disk via their usb-creator-gtk, you can specify an amount of persistent storage for the user.
If you need to make more modifications to a default install, you can always take a look at this article from lifehacker about customizing a live cd.
| LiveCD web browser distro |
1,489,764,327,000 |
I converted a machine from a single disk to mdadm RAID1. I did this nearly like it's described in Raid1 on a running system.
Before, I tried to make the same changes not within the running system, but in grml. update-grub failed. It complained /dev couldn't be found.
Why is update-grub not possible within a live-cd?
|
You have to bind-mount /dev, /proc, and maybe /sys into the chroot. You can use grml-chroot, which automatically binds these three directories into your chroot.
| update-grub in grml |
1,489,764,327,000 |
After going through the process of installation of a Unix OS (in this case Mageia) on VirtualBox 4.08, the next step is to remove the LiveCD.
Since the LiveCD is virtual, how should one proceed to remove it? Deleting the ISO does not seem like a clean way to do this.
Should the setting be changed in the storage section?
|
On the screenshot you provided, with the Live CD selected, click the little CD icon on the right. That should give you a dropdown box from which you can select Remove Disc from Virtual Drive.
| How to remove LiveCD after installing OS on VirtualBox? |
1,489,764,327,000 |
I just bought an HP pavillion g6 laptop, with the hope of installing Linux on it. I have now tried both Linux Mint (my first choice) and Ubuntu, and both simply give me a black screen from the moment it begins loading the Live CD. I think it reaches the login screen, I can hear the start-up jingle, but all is just black.
Mint gives an "Automatic boot in 10...9..." screen, then goes black. I can stop the countdown and pick from a few options, I tried the "compatibility mode" but that didn't help. The other options are integrity and memory checks, or to boot from the harddisk.
Ubuntu also shows a brief purple screen, where I can escape and either try it or install it. Given the problem I'm having I don't want to install just yet, so I haven't tried that. Picking "Try Ubuntu" I get a black screen immediately after.
Google turned up a suggestion of pressing CTRL+ALT+F2 after it has finished loading, to get a shell, but that doesn't seem to do anything.
I also searched through the BIOS options and set "Switchable Graphics Mode" to Fixed instead of Dynamic, but that didn't help either (so I've switched it back again).
I'm out of ideas. I'd prefer to get Mint to work, since I'm tired of Ubuntu and want to try out Mint instead.
Update I am able to get it to work by setting the nomodeset boot option, but without that I still get a black screen (I can just barely make out some elements on the screen, but it's very, very dark). I tried installing the proprietary ATI drivers in the Additional Drivers window, but that didn't seem to help, or they weren't installed properly, I can't seem to tell.
|
There's a launchpad bug about
Ubuntu booting with the laptop backlight off; that might be the problem you're seeing.
| Black screen at boot with Mint and Ubuntu live CDs |
1,489,764,327,000 |
According to this post I can:
Start the livedisk again, remount your drives archroot into your root
partition and then install the packages you need and everything should
work.
However, I cannot find any information on how to do it.
Could you please explain what I should do to remount the installation CD and be able to download the packages I want?
|
Thanks to other answers I was able to find this thread: What's the proper way to prepare chroot to recover a broken Linux installation?.
As every step is extensively explained in the provided thread on SuperUser I will only provide a very simple solution to what I was trying to achieve.
This approach is a great way to recover or change certain files from your Arch if for example:
the system is automatically turning off a few seconds after logging in and you want to remove those bad packages and config files.
you want to run sudo pacman -S iw wireless_tools network-tools dialog to get your wifi working after installing Arch.
Here are the steps:
Use a LiveCD which has the same architecture as the system you want to chroot into.
If you need a network connection, it's time to set it up. You can use wifi-menu, for example.
Now you'll have to enter these commands:
cd /
# I had to change ext3 to ext4.
# Depends on the filesystem one used during installation.
mount -t ext4 /dev/sda1 /mnt
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev
mount -t ext2 /dev/sda2 /mnt/boot
chroot /mnt /bin/bash
Now you are in a shell and you can do what you want to do.
Cleaning up.
exit
umount /mnt/boot # if you mounted this or any other separate partitions
umount /mnt/{proc,sys,dev}
umount /mnt
The end.
reboot
I strongly advise you to look at the original answer at SuperUser and at this thread which might be handy as well.
| Remount Arch Linux installation CD to download some additional packages |
1,489,764,327,000 |
Puppy Linux has a great feature:
as in wikipedia is mentioned
However, it is possible to save files upon shutdown. This feature allows the user to either save the file to disk (USB, HDD etc.) or even write the file system to the same CD puppy is booted from if "multisession" was used to create the booted CD (on CD-Rs as well as CD-RW) where a CD burner is present.
I would be interested in getting to know if there are any other livecds that also offer this feature?
|
Instead of a LiveCD, you can create a LiveUSB. It functions just like a LiveCD but can store information persistently in a file system called casper-rw. This file can reside on the hard drive or on the USB drive itself.
https://wiki.ubuntu.com/LiveUsbPendrivePersistent
http://en.wikipedia.org/wiki/Live_USB
http://www.debuntu.org/how-to-install-ubuntu-linux-on-usb-bar
| Linux Live CDs that are able to save configuration on the boot disk? |
1,489,764,327,000 |
I am trying to edit /boot/loader.conf on a FreeBSD system. The system was unable to boot because of some errors made to the file.
To remove these errors I have to boot using a live CD, mount the /boot partition, edit the file, and write the changes. How can I know what to mount, where to mount it, and how to get into that slice of the drive for editing?
|
I found an answer.
Just to let you know: even using an Ubuntu live CD might not do it.
I have used this useful link :
# sudo modprobe ufs
# mkdir ~/ufs_mount
# sudo mount -r -t ufs -o ufstype=ufs2 /dev/sdb1 /home/<your_username>/ufs_mount
Replace <your_username> with your user (home directory) name, and
sdb1 with the drive you want to get into.
However, you cannot write to a file in there.
Using -wr instead of -r won't work; you will get an error instead. This should shed more light on that.
Now using the live installation disk (or usb) of freeBSD system, will work as follows:
after getting into the live CD make the following
# gpart show -l
You will see many slices; you will recognize the disk of the form adaN (N an integer) with slices of their indicated volumes. By going to /dev and running ls you will see the names of the slices; in my case among them was ada0p2, which is the drive I needed to get into.
Go to /tmp and mkdir there; name it ufs_mount. Elsewhere mkdir won't work, because you are in a live CD and most of the folders are read-only.
After that:
mount -wr -t ufs /dev/ada0p2 /tmp/ufs_mount
ada0p2 is the name of the drive in my case.
Use Vi to edit your file, use cat to verify it and you are done.
| mount a drive in freeBSD to edit a file, using live CD |
1,489,764,327,000 |
I'm trying to create a custom Debian liveCD using live-helper, but offline. It was more or less possible with apt-cdrom (using the official DVDs to solve all the dependencies).
I did lh config and then lh build, like I'm supposed to. Problem is it failed and gave me this error:
E: Failed getting release file: http://ftp.de.debian.org/debian/dists/squeeze/Release
I poked around the internet and found an option about --mirror-bootstrap and using it to redirect to http://localhost/debian, but it gives the same error. I even tried fetching the Release file and using a local path (eg /root/debian/dists/squeeze/Release), but it wouldn't recognize it. I seem to need to use a URL, but I can't use a url to redirect to local hard drive.
I did find this though: http://lists.debian.org/debian-live/2007/07/msg00152.html
I didn't find any solution there, but it's the most information I could find.
I can't connect that computer to the internet. What can I do?
|
These instructions assume that you want to create a live disc from just one Debian DVD (or CD). I don't know how to combine different CD/DVD images to be one repository.
Install a web server:
sudo apt-get install cherokee
Create a mount point on the web server path and mount the disc:
sudo mkdir /var/www/squeeze
sudo mount /dev/scd0 /var/www/squeeze
Create a directory which will contain the configs and the live disc and navigate to it:
mkdir /path/to/live-build-dir
cd /path/to/live-build-dir
Run live-build config generator:
lb config --mirror-bootstrap http://localhost/squeeze
Look at lb_config manpage for a myriad other options. Also, ensure to wipe out your config directories if you re-run lb config command. Look at the docs for an explanation.
Build the live disc:
sudo lb build
NOTES:
I've tried with direct file access (file:/path/to/apt-repository), and it doesn't work. Probably a bug.
If you want to build Squeeze images, use the Squeeze version of live-build. The version in Wheezy or Unstable is currently broken, and the developers discourage its usage, other than for testing. The config formats are not even compatible.
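If installing a full web server feels heavy, a lighter option (my suggestion, not part of the recipe above) is Python's built-in HTTP server pointed at the mounted disc. A self-contained sketch, with a temporary directory and a dummy Release file standing in for /var/www/squeeze:

```shell
# Serve a directory over HTTP locally so apt can fetch from http://localhost:8931/
dir=$(mktemp -d)                                   # stands in for /var/www/squeeze
echo "Release-data" > "$dir/Release"               # dummy repository file
( cd "$dir" && exec python3 -m http.server 8931 ) >/dev/null 2>&1 &
srv=$!
sleep 1
got=$(curl -s http://127.0.0.1:8931/Release)       # fetch it back, as apt would
kill "$srv"
echo "$got"
rm -rf "$dir"
```

The same idea applies to the mount point above: serving /var/www/squeeze this way gives you a http://localhost:.../ URL for --mirror-bootstrap without installing cherokee.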
| Using live-helper offline |
1,489,764,327,000 |
I'm building my own UBCD, using a dual-layer disc to fit 7.5 GB.
Useful resources for UBCD customization:
The SYSLINUX Project
UBCD Customize
Casper Man Page
Live Boot Man Page
Here's my custom.cfg file.
MENU INCLUDE /ubcd/menus/syslinux/defaults.cfg
UI menu.c32
LABEL -
MENU LABEL ..
CONFIG /ubcd/menus/syslinux/main.cfg
LABEL -
MENU LABEL Caine 5.0 January 17th, 2014
TEXT HELP
Read only system forensics. 64bit system required.
ENDTEXT
LINUX /ubcd/custom/caine/casper/vmlinuz
INITRD /ubcd/custom/caine/casper/initrd.gz
APPEND boot=casper splash
LABEL -
MENU LABEL Deft 8.1 April 10th, 2014
TEXT HELP
Digital Evidence & Forensics Toolkit. 64bit system required.
ENDTEXT
LINUX /ubcd/custom/deft/casper/vmlinuz
INITRD /ubcd/custom/deft/casper/initrd.lz
APPEND file=/ubcd/custom/deft/preseed/lubuntu.seed boot=casper iso-scan/filename=/ubcd/custom/deft.iso splash --
LABEL -
MENU LABEL SpinRite
TEXT HELP
Repair damaged Hard Drives with Steve Gibson's SpinRite.
ENDTEXT
LINUX /boot/syslinux/memdisk
INITRD /ubcd/custom/spinrite.iso
APPEND iso raw
LABEL -
MENU LABEL Tails 1.0.1 June 10th, 2014
TEXT HELP
The Amnesic Incognito Live System. i386
ENDTEXT
LINUX /ubcd/custom/tails/live/vmlinuz
INITRD /ubcd/custom/tails/live/initrd.img
APPEND boot=live config live-media=removable nopersistent noprompt timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash noautologin module=Tails iso-scan/filename=/ubcd/custom/tails.iso
LABEL -
MENU LABEL Tails (failsafe) 1.0.1 June 10th, 2014
TEXT HELP
The Amnesic Incognito Live System. i386
ENDTEXT
LINUX /ubcd/custom/tails/live/vmlinuz
INITRD /ubcd/custom/tails/live/initrd.img
APPEND boot=live config live-media=removable nopersistent noprompt timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash noautologin module=Tails noapic noapm nodma nomce nolapic nomodeset nosmp vga=normal iso-scan/filename=/ubcd/custom/tails.iso
LABEL -
MENU LABEL Tails 1.0.1 64bit June 10th, 2014
TEXT HELP
The Amnesic Incognito Live System. amd64
ENDTEXT
LINUX /ubcd/custom/tails/live/vmlinuz2
INITRD /ubcd/custom/tails/live/initrd2.img
APPEND boot=live config live-media=removable nopersistent noprompt timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash noautologin module=Tails iso-scan/filename=/ubcd/custom/tails.iso
LABEL -
MENU LABEL Tails 1.0.1 64bit (failsafe) June 10th, 2014
TEXT HELP
The Amnesic Incognito Live System. amd64
ENDTEXT
LINUX /ubcd/custom/tails/live/vmlinuz2
INITRD /ubcd/custom/tails/live/initrd2.img
APPEND boot=live config live-media=removable nopersistent noprompt timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash noautologin module=Tails noapic noapm nodma nomce nolapic nomodeset nosmp vga=normal iso-scan/filename=/ubcd/custom/tails.iso
LABEL -
MENU LABEL Ubuntu Rescue Remix 12.04 April 26th, 2012
TEXT HELP
Ubuntu system rescue utility disc.
ENDTEXT
LINUX /ubcd/custom/urr/casper/vmlinuz
INITRD /ubcd/custom/urr/casper/initrd.gz
APPEND boot=casper iso-scan/filename=/ubcd/custom/urr.iso splash --
SpinRite works and Ubuntu Rescue works with some keyboard recognition error noise (but keyboard entry works fine.) Side note: Ubuntu Rescue also works with the options APPEND iso raw, but then it loads the entire iso image into memory before booting.
Caine, Deft, and Tails all don't find a live image to boot. Caine gets to some sort of sys mem prompt but keyboard input does nothing. Deft and tails get to a similar prompt initramfs. Both without the live image found, one of them doesn't respond to the keyboard and the other doesn't recognize it.
Basically I need to boot with the live images. Here's a tree of the directory structure under /ubcd/custom (with the Caine windows files cut out)
.
├── caine
│ ├── autorun.inf
│ ├── boot.catalog
│ ├── casper
│ │ ├── filesystem.squashfs
│ │ ├── initrd.gz
│ │ └── vmlinuz
│ ├── EFI
│ │ └── BOOT
│ │ ├── BOOTx64.EFI
│ │ └── grubx64.efi
│ ├── isolinux
│ │ ├── isolinux.bin
│ │ ├── isolinux.cfg
│ │ ├── splash.png
│ │ └── vesamenu.c32
│ ├── ldlinux.sys
│ ├── syslinux.cfg
│ └── UFO.dat
├── custom.cfg
├── custom.lst
├── deft
│ ├── casper
│ │ ├── initrd.lz
│ │ └── vmlinuz
│ └── preseed
│ ├── cli.seed
│ └── lubuntu.seed
├── deft.iso
├── spinrite.iso
├── tails
│ └── live
│ ├── initrd2.img
│ ├── initrd.img
│ ├── vmlinuz
│ └── vmlinuz2
├── tails.iso
├── urr
│ └── casper
│ ├── initrd.gz
│ └── vmlinuz
└── urr.iso
I extracted out the vmlinuz and initrd files from the ISOs but tried to keep and mount the existing ISOs just like the working Ubuntu example.
So the lines in the config are the LINUX/INITRD/APPEND lines for Caine, Deft, and Tails.
|
For Tails
pass the argument findiso to kernel as
findiso=/path/to/ISO boot=live config live-media=removable nopersistent noprompt quiet timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash nox11autologin module=Tails quiet
update
If you extract the contents of the ISOs to respective folders, then they can be booted with the boot argument live-media-path.
Assuming the ISOs are unpacked to /multiboot/OSname, where OSname is the name of the corresponding OS as given below. The following code is used by YUMI:
# Simple Menu Created by Lance http://www.pendrivelinux.com for YUMI - (Your USB Multiboot Installer)
caine
label live
menu label live - boot the Live System
kernel /multiboot/caine/casper/vmlinuz
append cdrom-detect/try-usb=true noprompt live-media-path=/multiboot/caine/casper/ file=/cdrom/preseed/custom.seed boot=casper initrd=/multiboot/caine/casper/initrd.gz quiet splash --
deft
menu label ^DEFT Linux LIVE
kernel /multiboot/deft/casper/vmlinuz
append cdrom-detect/try-usb=true noprompt floppy.allowed_drive_mask=0 ignore_uuid live-media-path=/multiboot/deft/casper file=/multiboot/deft/cdrom/preseed/lubuntu.seed boot=casper initrd=/multiboot/deft/casper/initrd.lz --
tails
menu label ^Run T(A)ILS (Anonymous Browsing)
kernel /multiboot/tails/live/vmlinuz
append timezone=America/Detroit initrd=/multiboot/tails/live/initrd.img boot=live config live-media=removable live-media-path=/multiboot/tails/live nopersistent noprompt quiet block.events_dfl_poll_msecs=1000 splash nox11autologin quiet
| Live Distros: UBCD boot Deft, Caine, and Tails from Custom Menu |
1,489,764,327,000 |
The Windows partition on my laptop seems to refuse to boot, and since I lack the original install CD, I was thinking of installing my preferred Linux distro, Arch on it.
I want to recover data off the Windows hard drive (and onto my external) first before installing Arch, so I intend to use a Live CD to do so. Both the Windows HD and my external HD are formatted using NTFS.
However, the display requires manual configuration in order to use X, which precludes the use of a Live CD that will boot directly into X.
What Live CD distros are available that support reading and writing to NTFS (preferably using ntfs-3g) and boot into text mode?
|
I use SystemRescueCd. It boots to a bash shell (where you can startx if you want) and can mount ntfs drives using ntfs-3g.
It also includes a lot of rescue tools.
| A Live CD distro for getting data off a NTFS partition without X Windows |
1,489,764,327,000 |
I'm looking for a live CD distribution that offers me fully featured environment for using Internet and OpenOffice (or some other office app compatible with M$ Word).
I thought about installing a mainstream distro like Ubuntu on USB Flash drive and adding the software myself, but I would have to get a fast 8 GB USB stick and I would lose the added security of CD/DVD -- that is the resistance to rootkits and the guarantee that I won't break anything on my system.
|
Linux Mint seems to be an exact match for what you're looking for! It includes almost everything you need: OpenOffice, codecs, Firefox, jockey for easy installation of drivers (if needed), XChat, Pidgin, VLC, Transmission (BitTorrent client), Java, etc. I've been using it for about a year now, and it hasn't let me down in that time. One thing I would advise you, though, is to stick with Linux Mint 10. This is not the latest and greatest release, but Linux Mint 11 seems to have some upstream issues, as it's based on Ubuntu, and Ubuntu is throwing everything around.
It's a Live DVD, but there's also a CD version that doesn't include some programs and codecs.
| Live Linux CD/DVD with Bittorrent client, Java, Flash, VLC? |
1,489,764,327,000 |
I am having trouble figuring out how to have grub installed on a floppy in a way that it automatically boots a modified Ubuntu 12.04 CD on startup.
I will settle for knowing some commands at the grub prompt
if automation is asking for the impossible.
The CD is bootable, but the system this is made for doesn't have a BIOS option to boot from CD (or USB), it can only boot from floppy or hard drive.
Background:
This is a system located remotely, and I would like to have something where I can tell the owner if there are problems: insert floppy and CD and reboot. The modifications to the CDs are such that openssh-server is installed, my public ssh key in /root/.ssh/authorized_keys2, ssh is listening on an additional port number as those < 1024 are blocked by the local provider and the system retrieves a page on my server (so I can find the IP address to connect to for remote maintenance). The CD works fine when testing in a VirtualMachine. There is a keyboard and monitor and I can ask the owner to type in a few commands.
I first looked at using grub2, but there are many incorrect how-tos about it, and the command-line options for grub2 seem to have been changed many times (--diet and --overlay, often mentioned, are no longer there).
There is a bug report about grub2 output not fitting on a floppy, which was closed recently. So I built grub2 from the repository (version 2.0), including the required new version of xorriso. The result of
grub-mkrescue --compress=xz -o grub-rescue.vfd
is a 4.4 MB image, which of course does not fit on a floppy at all, so I dropped that as a viable path to explore.
I have tried grub legacy (0.97) on a floppy, but cannot use find from the grub prompt to find anything on the CD, nor use something like chainloader (hd1). The grub (0.9x) manual has nothing to say about booting an ISO image.
I'd rather not install something on the hard drive and go the boot route FD -> HD -> CD, as this whole setup is needed in the first place if the hard drive has problems.
|
A better alternative to an all-in-one boot floppy is probably to use BCDL. The bootable CD loader automatically boots the first CD-ROM. The problem is that its CD driver is no longer up to date, so you need to replace VIDE-CDD.SYS on the floppy with e.g. XCDROM.SYS taken from here.
(Only tried with a virtual machine, not with a real FDD).
| grub on floppy to rescue CD boot chain |
1,489,764,327,000 |
While going through the man page, I found some options that seem to suggest that the command line tool livecd-iso-to-disk is capable of creating a multiple-boot USB:
--multi
Used when installing multiple image copies to signal configuration of
the boot files for the image in the --livedir <dir> parameter.
--livedir <dir>
Used with multiple image installations to designate the directory <dir>
for the particular image.
I have a 32 bit ISO image file and a 64 bit one in the directory /var/Installers/Fedora-20. I would like to be able to select either one from the grub menu when booting from the USB. I tried running the following:
# livecd-iso-to-disk --efi --multi --livedir /var/Installers/Fedora-20 /dev/sdc1
But it didn't work because the <source> argument is missing. I have two ISO images; it seems counter-intuitive to ask for the <source> when the directory to the image files is provided. Am I missing something here?
|
I am not familiar with this tool but from looking at the source for the livecd-iso-to-disk.sh script here, I think you've got this backwards. You still need to provide a single source (not a directory) because this tool can only do one ISO at a time, so you need to run it once for every ISO you want to add. Meanwhile, --livedir is supposed to be the name for the destination directory. This is so the tool does not use the default directory and clobber the last ISO you installed.
If I had to guess as to the correct usage based on what I have read, I would try
livecd-iso-to-disk --efi --multi --livedir <name_for_32_bit_dir> /var/Installers/Fedora-20/<name_of_32_bit.iso> /dev/sdc1
livecd-iso-to-disk --efi --multi --livedir <name_for_64_bit_dir> /var/Installers/Fedora-20/<name_of_64_bit.iso> /dev/sdc1
More information: https://fedoraproject.org/wiki/How_to_create_and_use_Live_USB#litd
Notice how the description of --livedir says "for the particular image", which implies singular, not a directory of multiple images.
| How can I create a multi-boot USB using livecd-tools? |
1,489,764,327,000 |
I was reading this article - how do I Access or mount windows NTFS partition in Linux that mentions:
NTFS3G is an open source cross-platform, stable, GPL licensed, POSIX, NTFS R/W driver used in Linux. It provides safe handling of Windows NTFS file systems viz create, remove, rename, move files, directories, hard links, etc.
So, is it compulsory to have NTFS3G on a live Linux CD to ensure that moving my files from one NTFS partition to another NTFS partition of a disk will not corrupt the files in those partitions?
Or in other words, does a live Linux CD or DVD in general (without NTFS3G) provide safe handling of Windows NTFS file operations (such as moving files)?
Does this also depend on the version of NTFS?
|
The original code in Linux for NTFS partitions could change an NTFS partition, but required you to do a disk check after rebooting into Windows NT.
I am not sure when this was; it might have been back in the last millennium with SuSE 4. And not working from a live CD, but from a dual-boot machine.
That changed with NTFS3G, where this is no longer necessary (praise the coders), hence the explicit mention of safe handling of NTFS file systems.
I am not sure, but I don't think live CDs were common before NTFS3G became mainstream, so I don't think you will find any that would corrupt NTFS enough to require a disk check. Any live CD from 2008 onwards should probably be OK. (The real question is why not just use a recent live CD.)
| Does Live Linux CD in general have safe handling of Windows NTFS files |
1,489,764,327,000 |
I'm not an expert on computers in general or Linux in particular, so if I'm vague on something please let me know and I'll try to elaborate.
I have an old computer running Red Hat 5.0. It had a Windows dual-boot (98 or XP - I forget which), but I never used it much and a few years ago when it said it was corrupted I threw up my hands and said good riddance. However, I believe it also had some version of Fedora and maybe something else. I say maybe, because I seem to be locked out of the boot menu.
After it boots up, it hangs for 4 seconds saying something like, "Press any key for options." Ordinarily, I'd be able to press a key and choose my OS. Today, however, I pushed a key and nothing happened. It did its typical sequence 3, 2, 1, then booted normally.
I tried rebooting a couple times, and the same thing happened. My questions are these:
Is there something I've screwed up somewhere that's causing this? Or is it probably just something to do with an ancient machine? Is there some sort of troubleshooting I can do to check for corruption?
Is there a way to restart my computer and boot it from a CD or DVD directly, e.g. terminal command? That's what I was trying to do originally when I found the problem.
Thanks!
|
You can reboot the computer with a terminal command, but you can't give it a terminal command that tells it what device to reboot into. Once the machine reboots control is passed to the BIOS, which then decides what device to boot from.
Some BIOSes will automatically offer to boot from a bootable CD/DVD if it detects one, but not all.
So when the machine starts (or restarts) you need to press whatever key your BIOS recognises as the BIOS boot menu key, if it has one. Failing that, you need to press the key which lets you get into the BIOS setup so you can select the bootable devices and the boot order. It's a Good Idea to make a CD / DVD drive the first bootable device.
To be somewhat vaguely on-topic, I guess I should've mentioned the terminal command(s) used to reboot. :) Check out the man pages for shutdown and halt, the halt man page also mentions its synonyms reboot and poweroff.
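For reference, the usual invocations of those commands are shown below (exact options and behavior vary slightly by distribution, so treat this as a sketch):

```shell
sudo shutdown -r now   # reboot immediately
sudo shutdown -h now   # halt / power off immediately
sudo reboot            # shorthand for an immediate reboot
sudo poweroff          # shorthand for an immediate power-off
```

Again, none of these can tell the BIOS which device to boot from next.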
...
I suppose corruption on your hard drive(s) could be stopping you from booting the various bootable partitions on the system, but at this stage I'm more inclined to think the problem is BIOS-related.
If you haven't done so already, take the machine apart and give it a good clean. Remove the RAM cards from their slots and make sure there's no corrosion on the connectors - a pencil eraser can be used to remove minor corrosion spots. A tiny bit of solvent (like rubbing alcohol) on a cotton bud (Q Tip) may be necessary for more stubborn spots. Do the same with any other removable cards. As I mentioned in the comments, it's probably a good idea to replace the CMOS battery.
To test that your RAM is healthy, run memtest (aka memtest86 or memtest86+). It's probably already installed, and is generally included on any Linux live CD/DVD (maybe in the boot/ directory).
If you suspect there's a problem with your hard drive partitions, run fsck on them. And you may also like to use the badblocks program. See their man pages for details, but if anything is unclear, please ask.
| How do I boot my computer from a Live-DVD from the terminal? |
1,425,182,803,000 |
What can I use to create a backup image of my entire system that will be saved on a LAN computer via SSH? If I break anything later, I want to be able to restore my entire system as it was before the backup in minutes. Is there a Live CD that can "save backup image to ssh://..." and "restore from backup image ssh://..."?
|
Clonezilla would be a suitable product for a whole-disk image. It works in a fashion similar to Ghost.
| How do I backup everything? |
1,425,182,803,000 |
I have a Debian (Wheezy) system that I have configured by installing/removing packages and editing some conf files.
I would like to distribute an (almost) exact replica of my system to other programmers on my team. My first instinct is to create an iso, but I'm willing to listen to other suggestions.
What's the easiest way to create an installable image of my system to replicate across several computers?
Answers I'm looking for
On one hand, I would like to know if there are apps that do this automatically, but I think it'd be more interesting to have a step-by-step explanation of how to do this using native Unix tools (cp? dd? cat? fdisk?)
Making the iso bootable as a LiveCD would be a cool addition, but isn't really an important step.
Bonus points if the solution is not limited to Debian only.
What I found so far
remastersys claims to do this, but it looks a bit outdated and unmaintained.
|
What I do to distribute systems easily is create an image (using Clonezilla over PXE and samba/nfs storage) and "cast" these images to different computers. This way I can rapidly restore images of my distributions. This is useful if the hardware is mostly the same.
There is also an option to alter live-cd's. You can read more about this here. This is however very time-consuming per live-cd.
Another option is having a look at software like Puppet. Puppet can push certain packages/configurations to a variety of operating systems. One can simply install e.g. Debian, tell Puppet to add this to the "webserver" group (or script this process), and Puppet will push the installation of Apache and other defined packages with pre-created configuration files etc.
If you create a clone using clonezilla you have the disadvantage that you have to alter some settings (f.e. ip addresses, /etc/hosts,...)
Puppet has the disadvantage that it takes some time to set up and configure, but it is much more powerful.
If you want, I can give you a PDF which explains how to set up Puppet and how to configure it (the basics).
| How to distribute my debian system? |
1,425,182,803,000 |
I tried out the linuX-gamers live DVD and like it so much that I want to have it as the main operating system on my desktop. The FAQ says:
Can I install or copy the medium to my hard disk?
No, the software is only designed to boot from a live medium.
However, as the live DVD is said to be based on Arch Linux, I think it is pretty much possible to put it onto the hard drive. A painful way to do this would be to install Arch and try to make it look like this DVD. The download page says that the ISO is isohybrid; I don't know if that makes any difference.
Is there a (reliable) way to turn an ISO like that into a working installation? I wouldn't mind spending a few days to mess with it.
|
As of today I have successfully installed this distribution and can use it as if it were Arch :) Below is the simplest way to do so:
Install Arch on the hard drive
Remove everything in / (in the local disk), except for /boot
Mount the root-image.sqfs image in the linuX-gamers live DVD and copy everything inside to /
Repeat the previous step with the overlay.sqfs image
Steps 2, 3 and 4 may have to be performed with a live CD. Further customization is needed, but the system can boot and function correctly after step 4.
This answer is really specific to this live DVD and thus not applicable to other live CD/DVD.
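As a rough sketch of steps 2–4 run from a live CD (device names and mount points here are assumptions; adapt them to your layout):

```shell
mkdir -p /mnt/target /mnt/dvd /mnt/sqfs

# mount the target root partition (assumed /dev/sda2) and the live DVD
mount /dev/sda2 /mnt/target
mount /dev/sr0 /mnt/dvd

# wipe the old root, keeping /boot
find /mnt/target -mindepth 1 -maxdepth 1 ! -name boot -exec rm -rf {} +

# loop-mount each squashfs image and copy its contents over
mount -o loop /mnt/dvd/root-image.sqfs /mnt/sqfs
cp -a /mnt/sqfs/. /mnt/target/
umount /mnt/sqfs

mount -o loop /mnt/dvd/overlay.sqfs /mnt/sqfs
cp -a /mnt/sqfs/. /mnt/target/
umount /mnt/sqfs
```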
| How to install from a Linux live CD that does not support installing? |
1,425,182,803,000 |
I would like to make a bootable USB/Floppy/LiveCD with a Linux kernel and Grub.
After booting that USB/Floppy/LiveCD using VirtualBox or directly, it will show my own customized Grub screen and then execute my C or Pascal application.
I was trying to download grub but I am not sure which one I should use. Does it matter which version of Grub I download — for example, are there separate downloads for 32-bit and 64-bit?
Which Grub should I download to get started with my own customized bootable image?
|
There are only two versions of grub listed there, the 1.x series (most recent being 0.97) and the 2.x series (most recent being 1.99). Both can be customized and used for your purpose. The 1.x series has more standard compatibility with old hardware and distros, but the 2.x series is coming along nicely and many major distros are switching to it. 32-bit vs 64-bit architecture is not a consideration for grub at this stage of the boot process; that won't come into play until you launch a kernel. Since grub doesn't do much, it's happy to run on a generic set of CPU instructions.
But really you shouldn't be starting with grub and working up from there ... that will be a long road. You should probably start with some already-arranged livecd image and work backwards to pare it down to just run your program on boot. This will save you all kinds of trouble. Pick some lightweight livecd that you like and get its source, then start stripping out the bits you don't need and adding your program.
| Which Grub to use for a custom portable boot image? |
1,425,182,803,000 |
How do you chroot into your Linux system using a live disc?
|
You can simply mount the filesystem:
mount /dev/sdXY /mnt
Then:
chroot /mnt
Before you chroot, there are a few other things you may want to do. For example, if you want to install programs, etc then you'll need to set up name resolution and such. Here's part of a "how to make a custom live dvd" howto that explains the actual commands - skip down to the end for how to clean up and exit the chroot and umount what is needed...
https://help.ubuntu.com/community/LiveCDCustomization#Prepare_and_chroot
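A fuller sequence, including the bind mounts and name resolution mentioned above (the partition name /dev/sdXY is a placeholder; requires root):

```shell
mount /dev/sdXY /mnt

# make device nodes and kernel interfaces visible inside the chroot
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys

# give the chroot working DNS
cp /etc/resolv.conf /mnt/etc/resolv.conf

chroot /mnt /bin/bash

# ... work inside the chroot, then exit and clean up:
umount /mnt/dev/pts /mnt/dev /mnt/proc /mnt/sys
umount /mnt
```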
| Chroot live disk |
1,425,182,803,000 |
I want to take my old notebook while travelling. I can only boot from USB or CD, though. I want to connect to the internet using USB tethering from my Android phone (HTC Desire + CyanogenMod 7.1).
If I connect my Android phone to my Windows 7 computer via USB cable and turn USB tethering on, Windows does the rest and I am connected to the Internet.
Can I be autoconnected (usb tethering preferably) to the internet using any live usb/cd linux/unix distro? Which one?
I'll be creating the usb from Windows7.
|
NetworkManager can connect you automatically if it's configured to do so. And it comes with most modern distros, such as Fedora or Ubuntu. I recommend using live USB so that you can retain the configuration between boots.
| What live distro can automatically accept usb tethering from android phone? |
1,425,182,803,000 |
Is there any way of running an OS on a smartphone without installing it on the device? That would be the equivalent of a live CD for a PC. I intend to test several BSD distributions on a phone, which could be an Android or a Windows one.
|
You would need a BIOS or EFI to set from which device the phone should start up. Then you could use the external SD card as Live CD, or maybe even the USB port.
But do we have something like a BIOS on our phones? I haven't heard anything about this. Of course the phone has firmware, but how can you activate it?
When your (Android) phone is rooted, there are tools that can install a new kernel. These come closest to what you want, but I don't have a rooted phone at the moment and can't tell you much more about this.
| The equivalent of a live-CD for a smartphone |
1,425,182,803,000 |
I'm running a live distro in ram. I need to write a password to a file. Later, I need to delete it securely.
I don't know how live file systems work, so I'm unsure whether, if I open the file for writing and then overwrite it with data the length of the password, it will actually overwrite that exact memory location. The goal is to securely delete the file by writing over it.
I don't know if journaling happens in live Linux distros. If so, is it possible to create a virtual file system in RAM? I could then just write the password there, and dd will securely write from beginning to end of the volume.
What are my options?
|
Journalling depends on the filesystem being used, but if you're using a live Linux distribution, there usually isn't any persistence by default (with some exceptions). If your filesystem is journalling (check /proc/mounts to find out which filesystem is being used) I would not rely on anything to try and "securely delete" a file (unless that filesystem is entirely in memory, in which case the data will be lost shortly after the RAM loses power).
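A quick way to do that /proc/mounts check and print just the filesystem type backing the root mount:

```shell
# second field of /proc/mounts is the mount point, third is the filesystem type
awk '$2 == "/" { print $3 }' /proc/mounts
```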
If you want to be safe, mount a ramfs (not tmpfs as this may swap!) filesystem and write it there. The data will be lost entirely once the RAM has lost charge (if you're paranoid, leave it unpowered for a minute or so).
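A minimal sketch of that approach (requires root; the mount point and file name are illustrative):

```shell
mkdir -p /mnt/secure
mount -t ramfs ramfs /mnt/secure

printf '%s' "$PASSWORD" > /mnt/secure/pw   # write the secret only here
# ... use the file ...
rm /mnt/secure/pw

umount /mnt/secure   # contents vanish for good once the RAM loses charge
```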
| Securely Deleting A File In Ram |
1,425,182,803,000 |
Most of the live CD systems I have come across don't seem to have the LFS requirements pre-installed (i.e. everything on this list).
Are there any live CD systems that come with the required software for LFS 7.0 preinstalled?
I have used the LFS liveCD in the past, but it looks like it is too out of date to be used with book 7.0.
|
You could build your own. While this clearly seems to be a chicken-and-egg problem, I just had a deeper look at SUSE Studio and it could be of great use here. Just login/create an account, choose a base template (say, "minimal X"), add software, choose "Live CD" in the Build tab.
Since all OpenSUSE repositories are available, you should find everything that's needed. Plus, when you're finished, you can share the image as "OpenSUSE LiveCD that supports everything needed for LFS 7.0"
| LiveCD for LFS 7.0 |
1,425,182,803,000 |
I want to be able to use a CD/USB bootable "live" linux distribution to:
read a truecrypt volume
mount local drives
Ubuntu Privacy Remix seemed perfect as it does include TrueCrypt, but it explicitly cannot see local drives (kernel source modified, as discussed here).
I want to use this live CD/USB distribution for backup/data recovery purposes, as in this question. For example, I might want to use truecrypt to decrypt the local drive and back it up unencrypted. Or, I might want to back up an unencrypted local drive to a truecrypt encrypted backup drive. Or both.
Yes, it really really must be truecrypt (or something that can safely and reliably read/write all valid truecrypt partitions...).
I'm aware that it is possible to boot one of the many live distros that doesn't have truecrypt, plug in a usb device with truecrypt on it, install truecrypt, and then use it for the above purposes. But that's painful and inelegant. Hence my question.
edit I have tried to use cryptsetup, as described in the answer from Xen2050. Cryptsetup has problems mounting some truecrypt partitions. So the question still stands.
|
Any live distribution with cryptsetup should be able to read truecrypt volumes, and I thought they all could mount local drives (apparently you found one that can't).
I know Linux Mint, Ubuntu, Debian and CrunchBang can, and probably any Debian-derived distro, or Arch, or Red Hat; I think they can all install cryptsetup one way or another.
FYI, from cryptsetup's help:
open --type tcrypt <device> <name>
tcryptOpen <device> <name> (old syntax)
Opens the TCRYPT (a TrueCrypt-compatible) <device> and sets up a
mapping <name>.
<options> can be [--key-file, --tcrypt-hidden, --tcrypt-system,
--readonly, --test-passphrase].
The keyfile parameter allows combination of file content with
the passphrase and can be repeated. Note that using keyfiles is
compatible with TCRYPT and is different from LUKS keyfile logic.
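Putting that together, opening and mounting a TrueCrypt volume from a live session might look like the following (the device, mapping name and mount point are assumptions; run as root):

```shell
# map the TrueCrypt volume to /dev/mapper/tcvol (prompts for the passphrase)
cryptsetup open --type tcrypt /dev/sdb1 tcvol

mkdir -p /mnt/tc
mount /dev/mapper/tcvol /mnt/tc

# ... back up or recover data ...

umount /mnt/tc
cryptsetup close tcvol
```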
| Live linux distribution that includes TrueCrypt, to be used for data recovery [closed] |
1,425,182,803,000 |
I have tried to install some packages (Apache, MariaDB) and something went wrong with my
sudo apt install apache2 mariadb-server apt-transport-https
At the end there was an error
Checking init scripts...
Unpacking libc6:armhf (2.32-4+rpi1) over (2.29-2+rpi1) ...
Setting up libc6:armhf (2.32-4+rpi1) ...
/usr/bin/perl: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory
dpkg: error processing package libc6:armhf (--configure):
installed libc6:armhf package post-installation script subprocess returned error exit status 127
Errors were encountered while processing:
libc6:armhf
Error: Timeout was reached
E: Sub-process /usr/bin/dpkg returned an error code (1)
You have new mail in /var/mail/pi
Which seems to be a known bug. People back then (March 2021) were updating to glibc 2.30 and libcrypt around 4.4.10; I was updating to libc6 2.32 and libcrypt1 4.4.27, so I do not know why the bug is still around (!)
My understanding from the bug is that I need to put a link or a file where this library is expected, e.g.
ln -s /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.1 /lib/libcrypto.so.1
The only problem is that I need to become root to do that, and sudo or su are just impossible right now; even logging in over ssh is impossible!
I read that the only hope here is to create the file or symlink either by booting the machine from a live CD/USB, or by putting the SD card holding the root filesystem into another machine and editing that part by hand.
My questions is:
Before I power off this Raspberry Pi and take out the SD card to create the link by hand, while I can still use the terminal from which I am logged in, is there any repair I can attempt?
I stress the fact that I cannot sudo ...
|
There is no way to create the link if you cannot become root.
I think you have to use a Linux system where you can work as root, insert the SD card and create the link.
Note that the unmodified ln -s ... command would create the link in the directory of your running system, not on the SD card. You would have to use something like
ln -s /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.1 /path/to/the/sd-card/lib/libcrypto.so.1
or
cd /path/to/the/sd-card/lib && ln -s /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.1 libcrypto.so.1
Check that the shared library /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.1 exists on the SD card. Otherwise you might have to find the correct name and/or location.
Note: This answer covers only the problem how to create the link if you don't have root access. I don't know if this will fix all problems. You might have to fix broken packages or incomplete package installations after creating the link and booting your Raspberry Pi.
| After libc6 upgrade `sudo: account validation failure, is your account locked?` - What can I do before I switch off and repair from live CD |
1,425,182,803,000 |
I'd like to build a Debian Live system based on sid with an encrypted root filesystem. It's quite easy to install a Debian system with a rootfs encrypted with dm-crypt/LUKS, so I suppose it's also doable on a live system.
In Debian Live 2.x, lh_config used to support an --encryption parameter which would do just that, but it's been removed in version 3.x, presumably because loop-aes has more-or-less been dropped from more recent versions of Debian.
I'm looking to do this because I'm booting the live system from PXE (with the help the fetch boot parameter) and would like to make sure it's used only by IT staff, as it contains sensitive information.
As an alternative, I've looked into using an encrypted persistence image, which seems to be better supported in Debian Live, but couldn't figure out how I could load it remotely instead of being looked for on local media.
|
I've sent a patch for version 4.x to the Debian Live developers as a starting point to implement this.
| Debian Live 3.x with encrypted root filesystem |
1,425,182,803,000 |
I have attempted to make remastered live CDs of Tiny Core Linux, Archboot (didn't get very far), and SliTaz with lsdvd included, in order to create a lightweight transcoding solution that devotes as much of the machine's processing to the transcoding as I can manage. Additionally, I opted for these run-from-RAM distributions so that I would be able to swap the live CDs out for a DVD without problems.
I have two Virtual Machines set up, one for Tiny Core Linux and the other for SliTaz. Within the respective operating systems, lsdvd seems to work just fine (I installed libdvdcss and libdvdread on both).
On each, I remastered live CDs so that all three of these packages are installed, and they both seem to behave in a similar way. That is, although they work on the installed OSes, they bring up similar errors in a live CD environment.
Here is the error output for each (this occurs after the version of libdvdcss and before the DVD Title Table is displayed):
Tiny Core Linux:
libdvdread: Can't seek to block 100301
libdvdread: Can't seek to block 100301
libdvdread: Can't seek to block 4096128
libdvdread: Can't seek to block 4096128
SliTaz:
hdc: command error: status=0x41 { DriveReady Error }
hdc: command error: status=0x50 { LastFailedSense=0x05}
hdc: possibly failed opcode: 0xa0
What interests me is that the problem seems to be distribution-independent. Is there something on my installed VMs that I should be including in order to mitigate this error? While researching on Google, I found that setting a region might help, but I am unsure how I would go about doing that in a portable way.
If there is a simpler way to go about what I am trying to make than how I am making it, I would be grateful if you could let me in on it! Learning the remastering processes for these different systems is intuitive, but it does take some time.
|
It turned out to be an issue with VirtualBox's "Passthrough" feature for the Host IDE Disc Drive. Without it, lsdvd cannot fully function.
| Can't Get "lsdvd" To Work On Remastered Live CD's |
1,425,182,803,000 |
For some reason, on my office machine every Linux live CD either takes forever to boot or goes to a terminal interface instead of the expected GUI. I used to boot an Ubuntu 10.10 live CD and live thumb drive on a regular basis. Today I tried the live CDs for GParted (got a terminal interface), Lubuntu (the latest version, I think 11.04?) (got a terminal interface), and Ubuntu 10.10 desktop (took 20 minutes to get to Try or Install, and then went nowhere after I clicked "Try" and left it alone for an hour).
I do not know what is going on. I do have a striped drive set for Windows XP, but in the past Linux still booted; I just could not mount those two drives.
All of these live CDs boot normally in a VM inside Windows XP.
|
While it's not entirely clear from your question, I assume that the same media you've previously used to boot the machine now show problems, without any changes in hardware configuration. If that's correct, it really looks like a flaky optical drive. Try to look at your log files -- most CD-related problems I've seen leave log entries, mostly hinting at some unreadable blocks on the medium.
If there's nothing there, try again with the USB thumb drive. If that does also not work, you should suspect your hardware and check for bad contacts of the RAM modules, PCI/PCIe cards and so on.
By the way: The fact that the machine can run Windows but can't boot Linux does not mean that everything is in perfect working order, hardware-wise. I've had cases like that which I finally attributed to Windows using different commands for the initialization of, say, the PCI bus controller. YMMV.
| Live-CDs no longer boot as expected |
1,425,182,803,000 |
The other day I burned a Chakra Linux iso to a DVD. When I booted into a live session I was able to access all my regular data. In Dolphin, when clicking on the icon belonging to my /home/user partition, a pop-up asked me for my sudo password. The password I entered was not my regular sudo password, but the password that belongs to the live DVD. By default, this password is always root. Using the default password I was granted access to all partitions where I have stored my data, installed Linux Mint and installed Windows 7.
I consider this to be a serious security breach. What is the point of having a password-protected account when all data can be accessed without the password by using a live dvd? Is this behaviour normal, or is something messed up on my system or is there something wrong with the Chakra distro?
|
If an attacker can boot a live CD in your environment, your environment is not secure. This is one of the reasons why physical security is so important.
As a general rule, physical access to the machine is all that's ever needed to compromise it. Unix permissions are enforced by the kernel. If you run a live CD and are root, there's no real difference than being root on your own machine; there's nothing special about your environment.
This is expected, and is normal. This is why data encryption is necessary if you want to hide your data from someone who can physically access your machine -- Unix permissions only protect you against an attacker attacking a running machine with a known permissions model, and known constraints (passwords, keys, etc) to stop an attacker from getting those permissions. If the attacker can manipulate the machine, boot into single user mode, boot a live environment, etc, then there's nothing you can do other than encrypt your data.
Encryption is a good first step, but at every step (when you are entering your authentication to decrypt the volume, etc) you are trusting the computer not to lie to you. You cannot trust the computer if you do not at least have good physical security.
Unix permissions require enforcement, and if your environment changes, that enforcement can become unexpectedly weaker.
| Why can I access all my files without a password when booting with a Chakra Linux live CD? |
1,425,182,803,000 |
When running from a live DVD, something highly annoying is:
I click on something, then I hear the DVD drive starting to spin. Now I know that I need to wait five seconds until the computer reacts.
How can I tell the DVD drive to keep spinning for 20 minutes instead of 1 minute?
I have already set the reading speed to the minimum constant angular velocity (CAV) setting.
|
As mentioned by sourcejedi, you can use sdparm to tweak the Power Condition page entries.
To see the current values, run sdparm -p po /dev/sr0 (or whatever your drive is). This will show the current timeouts (ICT and SCT; the IDLE and STANDBY flags also need to be set).
To change the values, run
sdparm -p po -s ICT=12000 /dev/sr0
sdparm -p po -s SCT=12000 /dev/sr0
(this will set both to 20 minutes).
Once you have settings which work for you, you can store them as power-on defaults for the drive with the -S option:
sdparm -S -p po -s ICT=12000 /dev/sr0
sdparm -S -p po -s SCT=12000 /dev/sr0
| Optical drive power settings: Wait longer before spinning down |
1,425,182,803,000 |
If I boot Linux without installation (i.e. from live DVD), how can I load the entire disc into RAM in order to be able to re-use the drive?
It is possible with PartedMagic already. How can I do it with Linux Mint?
|
Use the boot option toram.
For details see -
Re: Loading persistent USB Flash to RAM - answer on Linux Mint forum.
casper - a hook for initramfs-tools to boot live systems - Ubuntu manpages.
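For example, a GRUB menu entry passing the option might look like this (kernel and initrd paths are illustrative; Mint's live images use casper, so the option is combined with boot=casper there):

```
menuentry "Linux Mint (load to RAM)" {
    linux  /casper/vmlinuz boot=casper toram --
    initrd /casper/initrd.lz
}
```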
| Load entire live system into RAM? |
1,425,182,803,000 |
I have written a shell script to create a squashfs live system from a hard drive installation, in order to have a linux system running only in ram.
But when I run the toram live system, I have the following error in the dmesg:
systemd[1]: Failed to open /dev/shm device, ignoring: Inappropriate ioctl for device
Despite this error, the toram live system seems to work without problems.
The script only works if just one user is created on the installed Linux, because ACLs are not supported by squashfs but are needed for the directory /media/username/.
Here is the script:
#!/bin/bash
# Destination directory:
DEST=$HOME/squashfs
sudo mkdir -p ${DEST}
# Copying installation in destination directory:
sudo rsync --progress --specials --perms -av -lXEog --delete / ${DEST} --one-file-system \
--exclude=/proc/* --exclude=/tmp/* --exclude=/dev/* \
--exclude=/sys/* --exclude=/boot/* \
--exclude=/etc/mtab --exclude=${DEST}
# Give the user ownership of /media/username, because squashfs doesn't support ACLs
MEDIA="$USER:$USER $DEST/media/$USER"
sudo chown $MEDIA
# Remove links to mounted drives in destination directory /media/username/:
MEDIA="$DEST/media/$USER"
sudo rm -f $MEDIA/*
# Remove unwanted entries in fstab of the future live system:
sudo sed -i '/\/boot\/efi/d' ${DEST}/etc/fstab
sudo sed -i '/swap/d' ${DEST}/etc/fstab
sudo sed -i '/UUID=.*\ \/\ /d' ${DEST}/etc/fstab
# Mount special files in order to chroot:
sudo mount -o bind /proc ${DEST}/proc
sudo mount -o bind /dev ${DEST}/dev
sudo mount -o bind /dev/pts ${DEST}/dev/pts
sudo mount -o bind /sys ${DEST}/sys
sudo cp /etc/resolv.conf ${DEST}/etc/resolv.conf
# Upgrade the chrooted system, and install the live-boot package
# as well as the latest kernel:
sudo chroot ${DEST} apt-get update
sudo chroot ${DEST} apt-get upgrade
sudo chroot ${DEST} apt-get remove linux-image*
sudo chroot ${DEST} apt-get install live-boot
sudo chroot ${DEST} apt-get install linux-image-amd64
sudo chroot ${DEST} apt-get clean
sudo chroot ${DEST} apt clean
# Umount the special files:
sudo umount ${DEST}/proc
sudo umount ${DEST}/dev/pts
sudo umount ${DEST}/dev
sudo umount ${DEST}/sys
# Delete unwanted files:
[ -n "$DEST" ] && sudo find ${DEST}/var/mail ${DEST}/var/lock ${DEST}/var/backups ${DEST}/var/tmp -type f -exec rm {} \;
# Delete only OLD log files:
[ -n "$DEST" ] && sudo find ${DEST}/var/log -type f -iregex '.*\.[0-9].*' -exec rm -v {} \;
[ -n "$DEST" ] && sudo find ${DEST}/var/log -type f -iname '*.gz' -exec rm -v {} \;
# Clean current log files:
[ -n "$DEST" ] && sudo find ${DEST}/var/log -type f | while read file; do echo -n '' | sudo tee $file; done
# Clean package cache:
[ -n "$DEST" ] && sudo rm -v ${DEST}/var/cache/apt/archives/*.deb
# Remove old kernel and initrd from the partition where the live system will be installed:
sudo rm -f /media/$USER/LIVE/live/initrd.img*
sudo rm -f /media/$USER/LIVE/live/vmlinuz*
# Copy new kernel and initrd to the partition where the live system will be installed:
sudo mv -f ${DEST}/boot/initrd.img* /media/$USER/LIVE/live/initrd.img
sudo mv -f ${DEST}/boot/vmlinuz* /media/$USER/LIVE/live/vmlinuz
# Remove old squashfs from the partition where the live system will be installed:
sudo rm -f /media/$USER/LIVE/live/filesystem.squashfs
# Make the new squashfs:
sudo mksquashfs ${DEST} /media/$USER/LIVE/live/filesystem.squashfs -xattrs -processors 4 -noappend -always-use-fragments
`/media/$USER/LIVE/` is where the live system partition is mounted.
Then I boot the live system placed on a partition with the kernel options: `toram boot=live`
Edit:
When I run the df command, it tells me that /dev/shm is mounted on /run/live/medium.
The command mount | grep medium tells me that /dev/shm is also mounted on /usr/lib/live/mount/medium.
It seems the system keeps in RAM a copy of the partition where the squashfs is.
When I want to umount /run/live/medium, it tells me that it is impossible because the target is active. But I have successfully unmounted /usr/lib/live/mount/medium.
So I wonder if these problems are linked, and if there is a way to umount /run/live/medium?
|
This is caused by a bug in live-boot. The bug is already fixed upstream, but it will take some time until a release is made. In the meantime, you can just ignore the error log (it's harmless). Or you can patch it yourself:
if [ -f ${DEST}/lib/live/boot/9990-toram-todisk.sh ]; then
sed -i 's|dev="/dev/shm"|dev="tmpfs"|g' ${DEST}/lib/live/boot/9990-toram-todisk.sh
fi
Explanation of the bug: live-boot mounts a tmpfs using /dev/shm as the device name. But a tmpfs does not actually have an underlying device, so you can use anything as the device name. Usually, tmpfs is used. If /dev/shm is used, this confuses systemd, because it tries to open /dev/shm as a device, which fails because /dev/shm is not a device.
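If you want to confirm what the sed expression does before touching the real file, you can dry-run it on a scratch file (a sketch; the scratch line stands in for the one inside 9990-toram-todisk.sh):

```shell
# Dry-run the patch on a throwaway file instead of the real live-boot script
tmp=$(mktemp)
printf 'dev="/dev/shm"\n' > "$tmp"             # the offending line, as described above
sed -i 's|dev="/dev/shm"|dev="tmpfs"|g' "$tmp"
grep 'dev=' "$tmp"                             # now reads dev="tmpfs"
rm -f "$tmp"
```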
| Debian Linux live system toram: Failed to open /dev/shm device |
1,425,182,803,000 |
I want to run a few diagnostic scripts in a FreeBSD live CD (the scripts are for FreeBSD, so...). But I would like to monitor the HDD temperature while doing that, so is it possible to install any software (namely smartctl) inside a FreeBSD live cd-booted system?
As far as I found out, there is no pkg present in the live CD and all directories but /tmp are read-only. I have no idea where to start.
|
You can do that. But have you considered that you might have better options for your task?
Rather than the "Live CD" you could download the "memstick" version. So rather than booting a CD you would boot a USB stick and have a writeable system. And if the system is networked I would PXE boot a minimal image.
What you do not state, but I expect you know, is that you could download the .tbz file of the package and do the install using pkg_add package-name.tbz. Since it by default tries to install into /usr/local/bin, which is read-only, you have the problem. While you could play around with @option extract-in-place, you could easily run into post-install problems. You could also move the pertinent directories to a tmpfs filesystem, but it is not worth the hassle.
Your more reasonable options are:
1. Easy - simple copy
If it is a simple binary you could just ftp/sftp the file(s) into your /tmp. Then do a chmod +x on the executable and run it from there.
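As an illustration only (every name below — the URL, tool name and output path — is a placeholder; fetch(1) is FreeBSD's stock downloader, and sftp works just as well), wrapped in a function since it only makes sense on the live CD itself:

```shell
# Sketch of option 1; "$1" would be something like http://yourserver/mytool
run_tool_from_tmp() {
    fetch -o /tmp/mytool "$1"    # download into /tmp, the writable directory
    chmod +x /tmp/mytool         # make it executable
    /tmp/mytool                  # run it straight from /tmp
}
```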
2. Easy - NFS infrastructure
You could set up an NFS share on your network, mount it from your "Live CD" system, keep your executables on that share and run them from there.
3. Intermediate but effort needed - Build your own.
As you note the CD is read-only. But you can prepare your own image. Then you decide which files goes onto the system. To build a system you need to have the FreeBSD source on your system. Then it is as simple as:
cd /usr/src
make buildworld buildkernel
cd release
make release
make install DESTDIR=/var/myrelease
You will then find the release images in /var/myrelease when done. The hard part is then to understand the system and where to make additions. You will probably need to set SRC_CONF (see src.conf(5)) and you can learn a lot from release(7)
If you do not want to make the full release you can simply do make cdrom.
UPDATE: I just happened upon sysutils/packmule, which is a tool that helps you do exactly this. I have not tried it myself but it looks quite straightforward.
4. Intermediate but common - Build mfsbsd
Rather than starting with the official FreeBSD Live CD it is very common to use mfsbsd instead. It is a minimal system but very easy to work with. You can download readymade images from the homepage but you can easily build your own.
Adding packages is easy: copy the .tbz files that should be installed automatically into the packages/ directory.
mkdir ~/src
git clone https://github.com/mmatuska/mfsbsd.git ~/src/mfsbsd
cd ~/src/mfsbsd
mkdir packages
Now copy the *.tbz files to ~/src/mfsbsd/packages/ - then...
make iso BASE=/cdrom/usr/freebsd-dist
make iso BASE=/cdrom/12.2-RELEASE
make iso CUSTOM=1 BUILDWORLD=1 BUILDKERNEL=1
This would be my preferred method.
Alternative: PXE
You do not state if you are actually booting from a CD, or maybe doing a virtual boot with an .iso image in a VM. But for the kind of work you are playing around with, it could quickly be worth looking into NFS (networked filesystem) and PXE (network boot). Depending on your skill level it might be too soon, but have a look at PXE Booting Utilities With FreeBSD for a nice, full description of how to set up a complete environment. At the end it also describes how to boot a full FreeBSD live system with NFS. This can give you some pointers for further exploration.
| Can I install software in FreeBSD live CD? |
1,668,793,372,000 |
I have a somewhat old laptop that I want to use for learning more details about Linux, so I decided on a first-time Gentoo installation. I can only connect to the internet via WLAN with this laptop. My router only supports WPA(2). My biggest USB stick has 2GB for the live Linux, so the full Gentoo live DVD (larger than 2GB) is no option. The minimal Gentoo image has no wpa_supplicant (which would be needed for WPA), however.
What is the best option for me to follow the Gentoo manual as closely as possible?
|
Just download a live distro of your choice (one that ships wpa_supplicant) with the same arch (32/64-bit) you'll choose for Gentoo later
Create a bootable USB-Stick from it
Boot from the USB-Stick
Most of the upcoming steps require root privileges, so you could do a su in your live distro and go on as root.
Create your partitions (/boot,/home/,/) e.g. with fdisk on the hdd of the laptop
Create FileSystem with:
mkfs.ext2 /dev/WHATEVER_YOUR_BOOT_IS,
mkfs.ext4 /dev/WHATEVER_YOUR_HOME_IS
and
mkfs.ext4 /dev/WHATEVER_YOUR_ROOT_IS
mkdir /mnt/gentoo
Mount /-partition to /mnt/gentoo with:
mount /dev/WHATEVER_YOUR_ROOT_IS /mnt/gentoo
Create directories for mounting /home and /boot with:
mkdir /mnt/gentoo/{boot,home}
Mount /home and /boot with:
mount /dev/WHATEVER_YOUR_BOOT_IS /mnt/gentoo/boot
and
mount /dev/WHATEVER_YOUR_HOME_IS /mnt/gentoo/home
Download stage3 gentoo 32/64 bit (not hardened for now) to the User directory of your live-system
In the directory of the stage3:
tar -C /mnt/gentoo -xjf stage3.....tar.bz2 (that could take a moment)
Now chroot into your new gentoo
Install wpa_supplicant and bootloader
Shutdown Live-USB, unplug USB-Stick, reboot
Now you should be able to boot into your Gentoo (I hope I didn't miss anything :>)
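The "chroot into your new gentoo" step above hides a few sub-steps; here is a minimal sketch of them, wrapped in a function since it needs root and the prepared /mnt/gentoo from the earlier steps (the exact bind mounts follow common practice, not this answer):

```shell
# Sketch of the glossed-over chroot step (run as root; paths from the steps above)
enter_gentoo_chroot() {
    cp -L /etc/resolv.conf /mnt/gentoo/etc/   # keep DNS working inside the chroot
    mount -t proc proc /mnt/gentoo/proc       # kernel process info
    mount --rbind /sys /mnt/gentoo/sys        # sysfs and friends
    mount --rbind /dev /mnt/gentoo/dev        # device nodes
    chroot /mnt/gentoo /bin/bash              # enter the new system
}
```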
| What live linux smaller 2GB and with pre-installed `wpa_supplicant` is suited best for Gentoo installation? |
1,668,793,372,000 |
I have got a DVD-RW on which I'd like to install some linux distributions, but I've tried a few (Ubuntu 13.10, Crunchbang, ElementaryOS and some others, don't really remember), but they didn't work, which has me worried that it might not be possible.
The problem is that I will be using both my USB ports and my HDD for data, so that is not an option.
I'd just like to know if it was possible to install a distribution to my DVD-RW?
If I need to change some config files in the source and compile it myself (which I think is what I need to do), I can do that myself if you could tell me where to look.
|
Use an Ubuntu live CD with persistent storage.
Create a file called casper-rw.
touch /media/casper-rw
Then run the following command.
dd if=/dev/zero of=/media/casper-rw bs=1M count=128
You will then be able to boot into the liveCD like normal, but any changes made will be saved in the casper-rw file.
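One detail the answer leaves implicit: casper also needs a filesystem inside the casper-rw file before it can use it for persistence (the dd only allocates space, and makes the touch redundant). A sketch of the full sequence — the file is created under /tmp purely for illustration; the real file goes in the root of the USB medium, and ext2/ext3 work as well as ext4:

```shell
# Allocate the casper-rw file, then put a filesystem inside it
dd if=/dev/zero of=/tmp/casper-rw bs=1M count=128 status=none
mkfs.ext4 -F -q /tmp/casper-rw   # -F: operate on a regular file, not a device
```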
| How can I install a Linux distribution to my DVD-RW? |
1,668,793,372,000 |
Can we make a live dvd with a dual boot option between Ubuntu and Linux mint?
I have both ISOs. They are both larger than 800MB. If I burn 2 DVDs, then a lot of space will be wasted. So I wanted to make one live DVD that can boot both of them.
I have searched the net but did not find any result.
|
I have used Sardu for that job. It allows you to create a multiboot DVD, and you choose the distros you would like to multi-boot into. It involves downloading the distros, and this program will then boot them.
Details here; the process is detailed, but it's worth the trouble.
| How to make dual boot live dvd |
1,668,793,372,000 |
So I finally decided to move from Windows to Linux. But before that I wanted to try the Fedora distribution from a live CD. So I downloaded the Fedora 17 .iso file, unpacked it and then burned it to my CD. I rebooted the computer and saw a message saying "Select proper boot device" (and yes, I changed the settings right in the BIOS). So my next step was creating a live USB with the help of this creator. I still got the same message, even on my friend's computer.
File structure of the LiveUSB (CD) looks like this
boot
EFI
LiveOS
syslinux
GPL (file)
Can anybody advise what could possibly be the problem?
|
Maybe the .iso file isn't good, or maybe you made some mistake before burning it. Try not to unpack the .iso file. Just download CDBurnerXP (it is free) or something like that, choose the option "Burn ISO image", and the program will do everything for you (unpack and burn).
Give it a try ;)
| Unable to boot from Fedora LiveCD |
1,668,793,372,000 |
I am building a custom Alpine image based on isolinux.
Basically, I am squashing rootfs, and mounting it as overlayfs.
Bootloader does its job fine, the kernel loads, but I am stuck at the initramfs. Let's say I have the following:
#!/bin/sh
export PATH=/sbin:/usr/sbin:/bin:/usr/bin
/bin/busybox --install -s
rescue_shell() {
echo "Something went wrong. Dropping you to a shell."
#/bin/busybox --install -s
/bin/sh || exec /bin/busybox sh
}
mount -t sysfs sysfs /sys
mount -t proc proc /proc
mkdir -p /dev/pts
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2>/dev/null \
|| mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2
[ -d /dev/pts ] || mkdir -m 755 /dev/pts
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts
# shared memory area (later system will need it)
[ -d /dev/shm ] || mkdir /dev/shm
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm
/bin/sh
# other code left for simplicity
So once I enter /bin/sh, I don't have any modules loaded, in particular for block devices (/dev/sda, /dev/sr0), which I need to mount so I can extract the squashed image and mount the overlay.
Listing /proc/partitions gives me only ram[0-15] devices, which makes sense since after boot it's loaded into RAM.
So, my question would be: is there any way that devices get probed based on available hardware? I have tried with mdev as well, but still cannot get my block devices. A proper mdev.conf is there, and tests are performed in VirtualBox.
Thank you.
|
You could try your luck with modalias as exposed through the sysfs interface.
See for example https://patchwork.openembedded.org/patch/148854/ which suggests:
echo "/sbin/mdev" > /proc/sys/kernel/hotplug
mdev -s
find /sys/ -name modalias -print0 | xargs -0 sort -u -z | xargs -0 modprobe -abq
Note that I haven't tested this myself. Also this doesn't seem to be using BusyBox modprobe, which probably does not support -ab. Still, it might be worth checking out what your /sys looks like in early initramfs.
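As a quick read-only check of what sysfs exposes, you can list the modalias strings from a rescue shell or any Linux box (a sketch; whether anything shows up in early initramfs depends on whether devtmpfs/sysfs are mounted, as in your init script):

```shell
# Peek at the modalias strings the kernel exposes via sysfs (read-only, harmless)
find /sys/devices -name modalias -exec cat {} + 2>/dev/null | sort -u | head
```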
More links regarding modalias:
https://wiki.archlinux.org/index.php/Modalias
http://people.skolelinux.org/pere/blog/Modalias_strings___a_practical_way_to_map__stuff__to_hardware.html
How to assign USB driver to device
| How to autoprobe block devices in initramfs? |
1,668,793,372,000 |
On a Mint 19 Mate pen drive persistent setup, I attempt to copy the casper-rw persistent file limited to 4GB to an ext4 partition.
I am looking for the steps to transfer applications and data to an ext4 casper-rw partition and boot it from the pendrive.
Steps so far:
I have created an ext4 partition named casper-rw
I copied all casper-rw files using rsync -r -p -o -E
I removed casper-rw from the pendrive.
I rebooted counting on the ext4 casper-rw partition getting priority over the casper-rw file. The ext4 casper-rw partition appeared as casper-rw but was accessed as /casper-rw1.
On reboot there was one "cow format specified as overlayfs and no support found" error.
A second reboot brought Mint back with the ext4 casper-rw partition now mounted as casper-rw while the casper-rw file partition was also accessible.
On the next reboot expecting to boot using the ext4 casper-rw partition, "cow overlaysfs" again.
Removing the casper-rw file or backtracking to a previous casper-rw saved file resulted in the same error.
The casper-rw ext4 partition could not be renamed by Windows EaseUS Partition Master. Using a SystemRescueCd ISO loaded via YUMI, I was able to use gparted to change the ext4 casper-rw partition label.
Now I am back up with the casper-rw file mounted.
What can I try to move forward?
casper package 1.394
ubiquity-casper 1.394
lupin-casper 0.57build1
|
I downloaded linuxmint-19-cinnamon-64bit.iso last July, and mkusb can make a persistent live drive from it. I tested right now (using the default settings) with a Sandisk Extreme 16 GB USB3 pendrive.
You can see in the screenshot that the data of the root partition is the same as the casper-rw partition according to df. This indicates that the persistence works.
It is possible to grab more of the available drive space for persistence. If you select 100%, there will be no usbdata partition with NTFS.
| Mint "cow format specified as overlayfs and no support found" error |
1,668,793,372,000 |
I'm a new user of Linux and I installed Linux Mint 18.2.
I created 2 partitions:
Boot / Root (Namely /).
Home (/home).
I underestimated the size needed for the boot/root partition and only made it 25GB.
I now need to resize it.
I used LiveCD to run GParted and this is what I have:
Can anyone guide me on how I can resize the partitions without losing any data?
It seems I must delete the sdf2 partition completely which means data is lost.
Is there any other way to do it (I don't have anything besides the disks above)?
In the worst-case scenario I don't mind losing all data on /home, but I want the system to work as before.
Please guide me and remember those are my first steps in the Linux World.
Thank You.
|
In GParted:
First "Resize/Move" /dev/sda5 to the right. To do that, right click on the line reading "/dev/sda5" and select "Resize/Move"; then in the next window drag the handle on the left of the partition to the right (as far as you want to reclaim free space to the left), or modify the value for "Free space preceding". Select the "Resize/Move" button and then "Apply All Operations", which will then take some time as the data has to be moved to the right.
When finished, do the same with the extended partition, i.e. select the extended partition and shrink it from the left (move its start to the right) as far as possible.
Last, select "/dev/sda1", select "Resize/Move" again, and extend to the right into the now unallocated space.
If you do not want to play around with partitions, you might use some space in /home to hold data from / using softlinks. Find some folder with lots of data (e.g. using du -hs /*), move it somewhere in /home, and create a softlink: ln -s /home/<path_to_new_folder> /<name_of_moved_folder>. You should not do this for system folders like /bin, /usr or /var, but maybe for subfolders (e.g. /var/log).
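If you want to try the softlink mechanics safely first, here is a worked example on throwaway scratch directories (all paths below are illustrations, not real system folders):

```shell
# Rehearse the move-and-link trick on scratch directories
rm -rf /tmp/rootfs /tmp/homefs
mkdir -p /tmp/rootfs/bigdir /tmp/homefs
echo data > /tmp/rootfs/bigdir/file
mv /tmp/rootfs/bigdir /tmp/homefs/           # move the data to the roomy filesystem
ln -s /tmp/homefs/bigdir /tmp/rootfs/bigdir  # link it back to the old location
cat /tmp/rootfs/bigdir/file                  # prints "data": still readable via the old path
```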
| Resize Boot Partition Next to An Extended Partition |
1,668,793,372,000 |
Is there any Linux distribution with preinstalled NVIDIA CUDA support that could be launched from a live CD/USB drive?
|
Quote from https://superuser.com/questions/72226/linux-live-cd-for-distributed-computing-projects
Dotsch/UX is one.
Dotsch/UX - A USB/Diskless/Harddisk BOINC Ubuntu Linux Distribution
The purpose is to make a Linux distribution for BOINC which easily installs and boots from a USB stick, hard disk and from diskless clients, and also has some interfaces to set up the diskless server and the clients automatically.
BOINC Client: The BOINC client comes pre-installed and is started as a daemon, and is monitored and kept alive by this daemon. Dotsch/UX 1.0 includes the BOINC client 6.2.15. Dotsch/UX 1.1 includes the BOINC client 6.4.5 for CUDA support.
| Live Linux distribution with preinstalled NVIDIA CUDA support |
1,668,793,372,000 |
I currently have a few VM images of both Debian and Ubuntu. I was thinking it might be nice to burn them as a live CD on DVD-media and carry it around with me in case I need it whatever reason.
Is it possible to take my current Linux installation and burn it as a live CD or USB? The problem I thought I might have is the fact that it's on read-only media. I'm using Debian, and I may want to do this with Ubuntu as well.
|
On Ubuntu you have Remastersys. To install it use the following command
sudo apt-get install remastersys
To make a distributable live CD/DVD of your system use the command
sudo remastersys dist
This will create an ISO image in the /home/remastersys/ folder. Then burn it! :)
LINK http://www.ubuntugeek.com/creating-custom-ubuntu-live-cd-with-remastersys.html
| Linux installation to LiveCD? |
1,668,793,372,000 |
I'm currently running Windows 7 x64 Professional and want to dual-boot with CentOS.
I partitioned the disk and wrote a CentOS image to a flash drive. But when it boots there's no option to install Linux - only
Boot(textual mode),
Memory Test,
Network installation and
Boot from local drive (which is booting Windows).
I want to install Linux, not use the live CD all the time. What's the problem?
|
This posting is a bit old, but apparently the LiveCD doesn't have an "install" option. You need to get the regular install iso.
| No installation option with livecd |
1,668,793,372,000 |
I am using an ubuntu live cd to help me recover some data off of a hard drive. I used lshw -C disk to find out which device I need to copy, /dev/sda in this case.
I am using ddrescue -n to try and recover some data from a failing hard drive. It stops at 100GB of a 500GB hard drive. After it finishes sudo lshw -C disk does not print anything.
The next step in using ddrescue is to use sudo ddrescue -r 1 /dev/sda, but it reports there is no such file or directory.
What is going on; why is lshw failing to report anything?
Edit: Added sudo to relevant places.
|
It looks like the way in which your disk is failing is so bad that the kernel becomes unable to keep communicating with the disk.
There are probably a lot of errors concerning the disk in /var/log/kern.log. If you post its contents here, people might have tips to help you recover more. (Post only the part from the first disk error, presumably triggered during the ddrescue -n, to the point where the kernel deactivates sda; if there's a long and repetitive bit in the middle, it's ok to cut the repetitions.) But don't expect miracles, there's a chance that the last 400GB are simply beyond recovery without spending thousands of dollars on a professional service.
| "lshw -C disk" returns but prints nothing |
1,668,793,372,000 |
I want to make a persistent live boot, from a Debian ISO booted from a hard drive, in which I can store my data. So I downloaded debian-live (here) and modified the grub entry to be able to boot into the live system:
menuentry "Debian modified" {
set iso_path="/live-boot/debian-live.iso"
export iso_path
loopback loop $iso_path
set root=(loop)
set loopback="findiso=${iso_path}"
export loopback
linux /live/vmlinuz-5.10.0-13-amd64 boot=live persistence components keyboard-layouts=de splash verbose "$loopback"
initrd /live/initrd.img-5.10.0-13-amd64
}
I can boot into my live system but when I want to store data in it, after a reboot the data is lost. Am I doing something wrong in the grub entry here?
Here is some information that could also be useful for you; df -ha on the live boot gives me the following output (shortened to the relevant parts):
FS Size Used Avail Use Mounted on
/dev/sda1 ... ... ... ... /run/live/persistence/sda1 <= my main partition (also from where the boot is happening)
/dev/loop0 .. ... ... ... /run/live/medium
/dev/loop1 .. ... ... ... /run/live/rootfs/filesystem.squashfs
tmpfs ... ... ... ... /run/live/overlay
overlay ... ... ... ... /
tmpfs ... ... ... ... /usr/lib/live/mount
/dev/loop0 .. ... ... ... /usr/lib/live/mount/medium
/dev/loop1 .. ... ... ... /usr/lib/live/mount/rootfs/filesystem.squashfs
/dev/sda1 .. ... ... ... /usr/lib/live/mount/persistence/sda1
tmpfs ... ... ... ... /usr/lib/live/mount/overlay
and the fstab on the live boot gives me the following output:
overlay / overlay rw 0 0
tmpfs /tmp tmpfs nosuid,nodev 0 0
The mount | grep overlay returns:
tmpfs on run/live/overlay type tmpfs (rw,noatime,mode=755)
overlay on / type overlay (rw,noatime,lowerdir=/run/live/rootfs/filesystem.squashfs/,upperdir=/run/live/overlay/rw,workdir=/run/live/overlay/work)
tmpfs on /usr/lib/live/mount/overlay type tmpfs (rw,noatime,mode=755)
I also manually tried to mount the overlay directly to the persistent partition (sda1) with the persistent storage as upperdir / workdir which results in
overlay on / type overlay (rw,noatime,lowerdir=/run/live/rootfs/filesystem.squashfs/,upperdir=/run/live/persistence/sda1/rw,workdir=/run/live/persistence/sda1/work)
Of course, I created those directories on the persistent partition ;-) ... but data is still not stored persistently, and I don't know what to do.
So how can I modify all this so that I can store data on the live ISO and not lose it when rebooting?
|
Original advice
You can use mkusb to create a persistent live drive from the current Debian live iso files.
mkusb-dus and select 'dus-persistent'
This 'classic mkusb' method creates partition table and several partitions. See details at this link and that link.
mkusb-plug and select 'persistent' (or in mkusb-sedd '--pder')
This method 'semi-clones' the iso file using sed to get the boot option persistent into the otherwise cloned content from the iso file to the target device. In addition to that the remaining part of the drive 'behind' the copy of the iso file will be formatted into a file system that will provide persistence.
It should be straight-forward. mkusb-dus works both in graphical mode and text mode. mkusb-plug works only in graphical mode, but you can use mkusb-sedd 'manually' in text mode and get the same result, because mkusb-plug is a graphical front end of mkusb-sedd. See details at this link.
Advice modified after discussion with the OP
After a discussion (in comments) I suggest the following method to create a persistent live Lubuntu system in a drive with Ubuntu Server. (You may prefer another Ubuntu flavour, all current official Ubuntu desktop systems and community flavours should work here.)
I created an Ubuntu Server 22.04 LTS by extracting jammy-preinstalled-server-amd64.img.xz and cloning it to an SSD. It is convenient with mkusb.
I booted into Ubuntu Server and checked that it works correctly. The user name is 'ubuntu'. In this process I also changed password from 'ubuntu' to something else. The server expanded its root partition automatically to use the whole drive.
I see no good way to use a file for persistence; instead I suggest booting from another drive and using gparted to shrink the root partition of Ubuntu Server (by moving its tail end), and, in the unallocated space created, creating a new partition with the ext4 file system labeled writable.
You can see a padlock symbol for /dev/sdc1. It indicates that the partition is mounted, and you must unmount it before you can shrink it.
Then boot into the Ubuntu Server and create a directory Live in the home directory: /home/ubuntu/Live
Get lubuntu-22.04-desktop-amd64.iso for example via sftp into this directory.
create a menuentry for the persistent live system
sudo nano /etc/grub.d/40_custom
and make it look something like the following
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
#
menuentry "Lubuntu 22.04 ISO" {
set isofile="/home/ubuntu/Live/lubuntu-22.04-desktop-amd64.iso"
# or set isofile="/<username>/Live/lubuntu-22.04-desktop-amd64.iso"
# if you use a single partition for your $HOME
rmmod tpm
search --set=root --fs-uuid cc633893-7fde-4185-b852-7b886f51ff7f
loopback loop ($root)/$isofile
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noprompt noeject persistent
initrd (loop)/casper/initrd
}
set timeout_style=menu
set timeout=10
Please notice that
I use the UUID of the root partition of Ubuntu Server cc633... in order to identify the target device in a reliable way. You can find this UUID via
lsblk -o name,size,uuid
you should modify this UUID to match your particular system.
persistent is added to the 'standard' live linux command line.
The two last lines provide a 'hard' way to make the grub menu be displayed.
These edits of /etc/grub.d/40_custom will be activated by running
sudo update-grub
and you should notice it the next time you boot (or reboot).
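As an optional sanity check, you could grep the generated config for the new entry before rebooting. A sketch wrapped in a function, since the grub.cfg path is assumed for a stock grub install and the check only makes sense on the server itself:

```shell
# Confirm update-grub emitted the custom menuentry (path assumed: /boot/grub/grub.cfg)
check_live_menuentry() {
    grep -q 'Lubuntu 22.04 ISO' /boot/grub/grub.cfg \
        && echo "menuentry present"
}
```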
| Persistent Debian Live HDD |
1,668,793,372,000 |
I'm configuring an archiso profile to correctly implement an user managed by systemd-home in its generated iso. User's home directory doesn't need to be encrypted.
How can I do that?
PS: it seems there is no systemd-home tag.
PPS: I guess the answer to this question also answers how to easily migrate an user managed from systemd-home to a new system?
|
Suppose you've created the user user with UID and GID 60101 and that you're running your live cd with the standard user structure in place.
Prerequisites
Users managed by systemd-home need site-specific configurations which
depend on the system's machine-id. Since you're using a live cd system
you need to set a static machine-id first.
To do so, add the systemd.machine_id=<your_machine_id>
option to kernel boot parameters. You can get a system's machine-id with
systemd-machine-id-setup --print
NB: machine-ids should be considered "confidential", and must not be exposed in untrusted environments, in particular on the network.
Setup
Enable the systemd-homed service (on a live cd it implies
to setup the correct symlinks for the associated services in /etc/systemd/system).
Set systemd-home configuration file so that it won't create an encrypted home:
/etc/systemd/homed.conf
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
DefaultStorage=directory
Remove any user entry from
/etc/passwd
/etc/gshadow
/etc/shadow
/etc/group
Move user directory from /home/user to /home/user.saved
Let user be managed by systemd-home:
homectl create user --uid=60101 \
--real-name="Standard Computer User" \
--shell=/usr/bin/zsh
Move old home over new home:
homectl with user -- rsync -aHAXv \
    --remove-source-files \
    /home/user.saved/ .
Check everything has been correctly passed over.
Copy
/etc/systemd/homed.conf
/home/user.homedir
/var/lib/systemd/home
back to the airootfs of your archiso profile.
References
machine-id
Converting Existing Users to systemd-homed
Users, Groups, UIDs and GIDs on systemd Systems
JSON User Records
systemd-homed - ArchWiki
| Use systemd-home on a live cd system |
1,668,793,372,000 |
Edit 1:
The freezing just happened and I was able recover from it. Log(syslog) from the freezing until 'now': https://ufile.io/ivred
Edit 2: It seems a bug/problem with GDM3. I'll try Xubuntu.
Edit 3: Now I'm using Xubuntu. The problem still happens, but a lot less often. So.. it is indeed a memory issue.
I'm currently using Ubuntu 18.10 Live CD since my HD died. I did some customizations to my LiveCD mainly towards memory consumption, because I have only 4GB of RAM.
When my free memory goes below 100MB, my pendrive LED starts to blink like crazy and the system freezes, leaving me just enough time to get out of the GUI (Ctrl+Alt+F1...F12) and reboot (Ctrl+Alt+Del) or, sometimes, to close Google Chrome with sudo killall chrome.
So I created a very simple script to clean the system cache and close Google Chrome. Closing Chrome out of the blue like that is fine, since it asks you to recover the tabs when it wasn't closed properly.
The question: It works like a charm 95% of the time. I don't know if my script is too simple or if there is another reason for this intermittent freezing, since I can't check the log because of the need to reboot. Is there a more efficient way to do this? Am I doing it wrong?
Note: I have another script to clean the cache that runs every 15 minutes. Since I created those scripts I have been able to use my live CD every day with almost no freezing - maybe one per day. Before that I had to reboot every 30-40 minutes, because I use Chrome with several tabs.
My script:
#!/bin/bash
while true ; do
free=`free -m | grep Mem | awk '{print $4}'`
if [ "$free" -gt 0 ]
then
if [ $free -le 120 ]; # When my free memory goes below 120MB, do the commands below.
then
if pgrep -x "chrome" > /dev/null
then
sudo killall -9 chrome
sudo su xubuntu
/usr/bin/google-chrome-stable --password-store=basic --aggressive-cache-discard --aggressive-tab-discard
else
echo "Stopped"
fi
sudo sysctl -w vm.drop_caches=3
sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
fi
fi & sleep 1; done
|
The solution that "fra-san" gave in the comments fit perfectly. Using the "cgroup-tools" package I was able to limit Chrome's memory usage successfully. I tested it by opening dozens of tabs at the same time and I could see the memory limit in action. However, I had to leave my script running since, even though I mostly use Chrome, the system cache consumes a lot of RAM too.
My steps:
1- Used this script from: Limit memory usage for a single Linux process
#!/bin/sh
# This script uses commands from the cgroup-tools package. The cgroup-tools
# commands access the cgroup filesystem directly, which is against the
# (new-ish) kernel's requirement that cgroups are managed by a single entity
# (which usually will be systemd). Additionally, there is a v2 cgroup API in
# development which will probably replace the existing API at some point, so
# expect this script to break in the future. The correct way forward would be
# to use systemd's APIs to create the cgroups, but afaik systemd currently
# (Feb 2018) only exposes D-Bus APIs for which there are no command line
# tools yet, and I didn't feel like writing those.
# strict mode: error if commands fail or if unset variables are used
set -eu
if [ "$#" -lt 2 ]
then
echo Usage: `basename $0` "<limit> <command>..."
echo or: `basename $0` "<memlimit> -s <swaplimit> <command>..."
exit 1
fi
cgname="limitmem_$$"
# parse command line args and find limits
limit="$1"
swaplimit="$limit"
shift
if [ "$1" = "-s" ]
then
shift
swaplimit="$1"
shift
fi
if [ "$1" = -- ]
then
shift
fi
if [ "$limit" = "$swaplimit" ]
then
memsw=0
echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
else
memsw=1
echo "limiting memory to $limit and total virtual memory to $swaplimit (cgroup $cgname) for command $@" >&2
fi
# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d' ' -f2`
# try also limiting swap usage, but this fails if the system has no swap
if sudo cgset -r memory.memsw.limit_in_bytes="$swaplimit" "$cgname"
then
bytes_swap_limit=`cgget -g "memory:$cgname" | grep memory.memsw.limit_in_bytes | cut -d' ' -f2`
else
echo "failed to limit swap"
memsw=0
fi
2- Named it limitmem and copied it to /usr/bin/ so I could call it from the terminal just as limitmem. Now I can open a process limiting its memory usage to, for example, 800MB with this syntax: limitmem 800M command
In my case: limitmem 1000M google-chrome --password-store=basic --aggressive-cache-discard --aggressive-tab-discard
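For completeness, a rougher alternative that needs no cgroups at all (a sketch; it caps the virtual address space, not resident memory, so it is cruder than limitmem):

```shell
# Sketch: ulimit -v caps the virtual address space (in KiB) of everything
# started in this subshell; allocations beyond the cap simply fail.
( ulimit -v 1048576; echo "this command runs under a ~1 GiB address-space cap" )
```

Replace the echo with the actual command to limit, e.g. the google-chrome invocation above; child processes inherit the limit.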
| Shell-script to periodically free some memory up with Ubuntu 18.10 LiveCD with only 4GB of RAM. Need some improvement |
1,668,793,372,000 |
Is there any way to mount, in my Ubuntu Live, an ext4 partition that sits in another PC running Windows on the same network?
My HD died earlier today and I need to use live distros until I get a new one. I chose Ubuntu 18.10. I customized my Ubuntu Live, and to do that I needed to make an EXT4 partition on my notebook's HD (running Windows). I took the HD out and put it in my PC. I want to connect to it remotely so I won't need to take it out again.
My workaround (no success!):
I tried to remotely mount, via a Windows share, a virtual HD image (created with the Windows version of dd). This way I managed to create the partition and edit my Ubuntu Live '.iso'. The problem came when I tried to copy my edited iso out of the virtual HD image: no matter where I copied it to, I got an I/O error at the end of the copy.
I can't set up a virtual machine on my notebook; it has only 2GB of RAM.
|
Unfortunately¹, Windows cannot even read EXT4 partitions without third-party software. There are a few of them out there that can do local read-only mounting of EXT4 partitions but only one (commercial) that can do both reads and writes.
However, none of those will allow you to share these on a Windows Network: they're for local reading (or writing in one case) only.
So to have full access to your drive remotely you'll have to:
create an NTFS volume on your USB stick as Linux can easily read and write to NTFS volumes.
keep data that you want to access remotely on the NTFS volume (Documents, Videos, Music, whatever) as that's just native on Windows and Windows can share it just fine.
keep the data that you needed to be on the EXT4 where it is now.
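For the first step, a minimal sketch (the device name /dev/sdb1 and mountpoint /mnt/usb are assumptions; check yours with lsblk first). With DRY_RUN=1, the default here, the commands are only printed, not executed:

```shell
# Sketch: format a stick partition as NTFS and mount it read-write on Linux
# via the ntfs-3g driver. run() prints instead of executing while DRY_RUN=1.
DEV=${DEV:-/dev/sdb1}
MNT=${MNT:-/mnt/usb}
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
run sudo mkfs.ntfs -Q -L SHARED "$DEV"   # quick-format, label SHARED
run sudo mkdir -p "$MNT"
run sudo mount -t ntfs-3g "$DEV" "$MNT"
```

Set DRY_RUN=0 only once you are sure DEV points at the right partition: mkfs destroys whatever is on it.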
Note¹: Actually for us Linux admins that's a fortunately because this way, Windows cannot mess up EXT4 partitions...
| Mount an ext4 partition in another PC(Windows) at the same network in my Ubuntu Live? |
1,668,793,372,000 |
While booting a CentOS LiveCD I received a prompt (see image). What is this for?
|
It looks like a disk (or partition) decryption prompt.
| CentOS LiveCD Boot Prompt? |
1,668,793,372,000 |
I recently got a message while updating some programs that my /boot is full. I've read other posts where everyone says to delete files in /boot, but making it bigger seems more logical and better long-term in case it fills up again. I right-clicked on /dev/sda2 and clicked resize, but it doesn't move. Any ideas?
If this doesn't work I'll use SystemRescueCd and try again.
|
I figured it out. It's definitely due to the encryption: GParted sees the encrypted space as unpartitioned, so it can't do much with it.
Unfortunately it's going to be a command-line fix only. Here's a writeup on how to do it under Ubuntu: ResizeEncryptedPartitions - Community Help Wiki
| Can't resize /dev/sda2 extended partition with gparted live cd |
1,668,793,372,000 |
I'm working on distributing a version of Fedora to our internal development team using Live CD. There are certain files I would like to copy to the live cd that are not:
part of an RPM or
going to live in the user's home directory.
Based on samples I've seen I'm trying something like the following in the post section of my kickstart file, to no avail.
%post --nochroot
cp -ar /tmp/files2copy $LIVE_ROOT/files
%end
Where is $LIVE_ROOT? Does it need to be exported earlier in the *.ks file? Do I need to create the files directory using mkdir?
|
In this case, my problem was simply that the directory didn't exist. This did the trick.
%post --nochroot
LIVE_ROOT=/my/root
mkdir $LIVE_ROOT
cp -ar /tmp/files2copy $LIVE_ROOT/files
%end
| Copy files from source machine to LiveCD |
1,379,695,526,000 |
The Arch wiki installation guide lists a minimum of 512 MiB of RAM to boot the live installer, but more than that seems to be needed.
Other users noted that ArchISO 202005 breaks with 512MB of RAM.
I can confirm that iso 202002 still boots OK with 512 MiB of memory. Release 202003 fails back to interactive rootfs prompt with logged errors:
Initramfs unpacking failed: write error
and
/dev/disk/by-label/ARCH_202003 device did not show up after 30 seconds.
Which minimum amount of memory is needed to boot the Arch Linux live iso?
|
740 MiB
Trial and error in a VM with 4 MiB increments shows that the minimum amount of RAM for the "current" archlinux-2022.09.03-x86_64 is 740 MiB.
History
720 MiB of RAM is required for the old archlinux-2021.07.01-x86_64.
656 MiB of RAM is required for the old archlinux-2021.02.01-x86_64.
520 MiB for the first release (archlinux-2020.03.01-x86_64.iso) that did no longer boot with 512 MiB.
| Which minimum amount of RAM is needed to boot the Arch Linux live iso? |
1,379,695,526,000 |
So I have a Scientific Linux LiveDVD, installed on a PC that has no hard drives configured and available at all. I want to install some applications that would allow me to configure the system before I am able to install the OS.
So I wonder: how do I create a temporary installation folder that exists only while the OS is running in RAM, install applications into it (using the standard installer, yum), and run them?
|
Yum will do that by default in Live mode; anything you install while running off a live optical disc is installed to RAM, because you are running from RAM as it is.
If you want to do it explicitly, though, you can create a RAM disk:
mkdir /foo
mount -t tmpfs -o size=4096M bar /foo
where:
mount is the command.
-t tmpfs specifies the type of filesystem. In this case, the filesystem type is tmpfs
-o size=4096M is for options and in this case, define the size as roughly 4gb. You can obviously make it larger or smaller depending on your needs and available RAM.
bar is the label of the filesystem that you are creating. Name it whatever you please; you'll rarely see it.
/foo is the location you want to mount the RAM disk.
I do not see an advantage to doing it this way; the live environment's default should work just as well.
| How to install applications temporary into RAM on LiveCD? |
1,379,695,526,000 |
Is it possible to boot an install CD to RAM?
I want to boot the CD and eject it before I proceed with the setup.
And if not, where do the changes have to be made to get a "toram" boot option?
|
It seems impossible, with the current Debian Installer, to copy the base packages from the boot media to RAM to build an alternative APT repository for the installation.
But you might be able to eject the media after boot and continue the installation using the "netboot" image, which downloads everything from the internet rather than from the boot media. You can remove the media for good once the netboot installer has started. You will be asked to configure networking at a very early step of the installation process.
You can find the netboot ISO as mini.iso in a folder like http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/ of the official repository.
It's quite small in size compared to other installer images like "netinst".
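Building the download URL for other releases or architectures follows the same layout (a sketch; the path is assumed from the mirror structure quoted above):

```shell
# Sketch: compose the netboot mini.iso URL for a given release and arch.
release=jessie
arch=amd64
url="http://ftp.debian.org/debian/dists/$release/main/installer-$arch/current/images/netboot/mini.iso"
echo "$url"
# then e.g.: wget "$url" and write it to a USB stick with dd (double-check the device!)
```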
| Boot Debian netinst CD to RAM? |
1,379,695,526,000 |
Playing around with the GRUB2 configuration, I did something wrong, which caused the system to fail booting.
I would like to fix the issue by re-editing the GRUB configuration file using Fedora 16 LiveCD.
I proceeded as follow:
mount my system partition [OK]
become super user [OK]
sudo gedit /etc/default/grub to restore it to its previous state [OK]
(Somehow I had to "sudo" although I was already superuser; that's confusing.)
sudo grub2-mkconfig -o /media/MYPARTITION/boot/grub2/grub.cfg
The last step, however, returned an error saying the file system is read-only, so I failed to restore the GRUB2 configuration :(
How should I proceed?
|
You don't even need a livecd; you can correct it within grub. You can press e at the grub menu to edit the entry and fix whatever you broke, then ctrl-x to boot the corrected entry. Once the system is up and running, fix your cfg file permanently.
If you do it from the live cd instead, you must not mount the partition read-only. If you didn't mount it read-only, then the filesystem must have an error that caused it to switch to read-only, so you should fsck it before mounting.
| How to fix mistake in grub.cfg from LiveCD? |
1,379,695,526,000 |
I have a LiveCD started as the default user. How do I log in as root?
Here it is said that there are boot parameters. How and when do I set them?
|
Have you tried just using su?
Most of the time the default user on a livecd has passwordless sudo, and can also su passwordlessly to any other user.
| How to login as superuser\administrator in Scientific Linux 6 LiveCD? |
1,379,695,526,000 |
I created a Debian Live Image, but I'm missing a step for getting the debian-live-installer to work. I took the following steps:
##needed packages
apt-get install --assume-yes xorriso live-build syslinux squashfs-tools
##create basic system
mkdir ~/livework && cd ~/livework
debootstrap --arch=amd64 wheezy chroot
##chroot
cd ~/livework
chroot chroot
mount none -t proc /proc
mount none -t sysfs /sys
mount none -t devpts /dev/pts
export HOME=/root
export LC_ALL=C
export PS1="\e[01;31m(live):\W \$ \e[00m"
##kernel and set password
apt-get install --assume-yes dialog dbus
dbus-uuidgen > /var/lib/dbus/machine-id
apt-get install --assume-yes linux-image-amd64 live-boot
passwd
##install packages
apt-get install --assume-yes xserver-xorg slim fluxbox debian-installer debian-installer-launcher
##finished chroot
apt-get clean
rm /var/lib/dbus/machine-id && rm -rf /tmp/*
umount /proc /sys /dev/pts
exit
##setup isolinux
cd ~/livework
rm chroot/root/.bash_history
mkdir -p binary/live && mkdir -p binary/isolinux
cp chroot/boot/vmlinuz-3.2.0-4-amd64 binary/live/vmlinuz
cp chroot/boot/initrd.img-3.2.0-4-amd64 binary/live/initrd
rm binary/live/filesystem.squashfs
mksquashfs chroot binary/live/filesystem.squashfs -comp xz -e boot
cp /usr/lib/syslinux/isolinux.bin binary/isolinux/.
cp /usr/lib/syslinux/menu.c32 binary/isolinux/.
##sample binary/isolinux/isolinux.cfg
ui menu.c32
prompt 0
menu title CASED Boot Menu
timeout 50
label live-amd64
menu label Live amd64
menu default
linux /live/vmlinuz
append initrd=/live/initrd boot=live persistence quiet
label live-amd64-failsafe
menu label ^Live (amd64 failsafe)
linux /live/vmlinuz
append initrd=/live/initrd boot=live persistence config memtest noapic noapm nodma nomce nolapic nomodeset nosmp nosplash vga=normal
endtext
##build iso
cd ~/livework
xorriso -as mkisofs -r -J -joliet-long -l -cache-inodes -isohybrid-mbr /usr/lib/syslinux/isohdpfx.bin -partition_offset 16 -A "Debian Live" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o my-debian.iso binary
After logging into my live system, I want to be able to install it to my HDD. Executing debian-installer-launcher gives me the following error:
no suitable d-i initrd image found, aborting
I googled for it, but could not find any answer. What step am I missing?
|
Okay, solved it. The following steps did the job:
cd ~
## [ get debian-live iso... ]
## create folder for mounting the iso
mkdir debian
mount debian.iso debian/
## copy install (contains initrd(gtk/normal) and vmlinuz)
cp -r debian/install livework/binary/
## copy needed files for installation
cp -r debian/pool livework/binary/
cp -r debian/dists livework/binary/
## create iso
cd ~/livework
xorriso -as mkisofs -r -J -joliet-long -l -cache-inodes -isohybrid-mbr /usr/lib/syslinux/isohdpfx.bin -partition_offset 16 -A "Debian Live" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o my-debian.iso binary
Hope this might be helpful for somebody else too ;)
| Missing step: debian-installer-launcher |
1,379,695,526,000 |
I have a LiveCD Ubuntu Linux iso image and I need to update the kernel of this image. I tried extracting the iso on a host Ubuntu system and moving the compiled kernel and modules from there into the LiveCD, but after doing that the system gets stuck at:
loading kernel /casper/vmlinuz.. done
loading file /casper/initrd.img.. done
I think the initrd can't find the root filesystem, which is at "/casper/filesystem.squashfs".
Does anyone know a valid way to do this task? My next attempt would be to mount the root filesystem of the LiveCD on the host system and compile the new kernel from there.
|
Issue resolved by:
Installing the new kernel in a chroot of the liveCD root filesystem itself, and then moving the new kernel (vmlinuz) and the initramfs image (initrd) to the /casper directory in the USB top-level filesystem.
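A small sketch of the copy step (the CHROOT path is an assumption; adjust it to wherever the LiveCD root filesystem is mounted). It picks the newest kernel/initrd pair that the chroot install produced:

```shell
# Sketch: locate the newest vmlinuz/initrd.img in the chroot's /boot and show
# the copy commands for the /casper directory. sort -V orders by version.
CHROOT=${CHROOT:-/mnt/chroot}
kernel=$(ls -1 "$CHROOT/boot"/vmlinuz-* 2>/dev/null | sort -V | tail -n1)
initrd=$(ls -1 "$CHROOT/boot"/initrd.img-* 2>/dev/null | sort -V | tail -n1)
echo "cp $kernel /casper/vmlinuz"
echo "cp $initrd /casper/initrd"
```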
| How to update the kernel for a live CD Linux image |