1,463,327,505,000 |
When I try to list directories via the Options Indexes option, with SetHandler application/x-httpd-php enabled in my Apache config, I get an
attempt to invoke directory '/srv/http' as script
error in the Apache error log, or a less verbose
403 Forbidden
error when I try to access a directory from the browser.
This seems to be a well-known problem, since there is an article about this issue in the Apache wiki: https://wiki.apache.org/httpd/DirectoryAsScript
I updated my Apache config with the changes suggested in the article and double-checked that I really edited the configuration file as suggested, but I still get an "attempt to invoke directory '/srv/http' as script" error.
This is my full httpd.conf file:
#
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.4/> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.4/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "logs/access_log"
# with ServerRoot set to "/usr/local/apache2" will be interpreted by the
# server as "/usr/local/apache2/logs/access_log", whereas "/logs/access_log"
# will be interpreted as '/logs/access_log'.
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path. If you point
# ServerRoot at a non-local disk, be sure to specify a local disk on the
# Mutex directive, if file-based mutexes are used. If you wish to share the
# same ServerRoot for multiple httpd daemons, you will need to change at
# least PidFile.
#
ServerRoot "/etc/httpd"
#
# Mutex: Allows you to set the mutex mechanism and mutex file directory
# for individual mutexes, or change the global defaults
#
# Uncomment and change the directory if mutexes are file-based and the default
# mutex file directory is not on a local disk or is not appropriate for some
# other reason.
#
# Mutex default:/run/httpd
#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80
#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
LoadModule authn_file_module modules/mod_authn_file.so
#LoadModule authn_dbm_module modules/mod_authn_dbm.so
#LoadModule authn_anon_module modules/mod_authn_anon.so
#LoadModule authn_dbd_module modules/mod_authn_dbd.so
#LoadModule authn_socache_module modules/mod_authn_socache.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
#LoadModule authz_dbm_module modules/mod_authz_dbm.so
#LoadModule authz_owner_module modules/mod_authz_owner.so
#LoadModule authz_dbd_module modules/mod_authz_dbd.so
LoadModule authz_core_module modules/mod_authz_core.so
#LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
#LoadModule authnz_fcgi_module modules/mod_authnz_fcgi.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule auth_basic_module modules/mod_auth_basic.so
#LoadModule auth_form_module modules/mod_auth_form.so
#LoadModule auth_digest_module modules/mod_auth_digest.so
#LoadModule allowmethods_module modules/mod_allowmethods.so
#LoadModule file_cache_module modules/mod_file_cache.so
#LoadModule cache_module modules/mod_cache.so
#LoadModule cache_disk_module modules/mod_cache_disk.so
#LoadModule cache_socache_module modules/mod_cache_socache.so
#LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
#LoadModule socache_dbm_module modules/mod_socache_dbm.so
#LoadModule socache_memcache_module modules/mod_socache_memcache.so
#LoadModule watchdog_module modules/mod_watchdog.so
#LoadModule macro_module modules/mod_macro.so
#LoadModule dbd_module modules/mod_dbd.so
#LoadModule dumpio_module modules/mod_dumpio.so
#LoadModule echo_module modules/mod_echo.so
#LoadModule buffer_module modules/mod_buffer.so
#LoadModule data_module modules/mod_data.so
#LoadModule ratelimit_module modules/mod_ratelimit.so
LoadModule reqtimeout_module modules/mod_reqtimeout.so
#LoadModule ext_filter_module modules/mod_ext_filter.so
#LoadModule request_module modules/mod_request.so
LoadModule include_module modules/mod_include.so
LoadModule filter_module modules/mod_filter.so
#LoadModule reflector_module modules/mod_reflector.so
#LoadModule substitute_module modules/mod_substitute.so
#LoadModule sed_module modules/mod_sed.so
#LoadModule charset_lite_module modules/mod_charset_lite.so
#LoadModule deflate_module modules/mod_deflate.so
#LoadModule xml2enc_module modules/mod_xml2enc.so
#LoadModule proxy_html_module modules/mod_proxy_html.so
LoadModule mime_module modules/mod_mime.so
#LoadModule http2_module modules/mod_http2.so
#LoadModule ldap_module modules/mod_ldap.so
LoadModule log_config_module modules/mod_log_config.so
#LoadModule log_debug_module modules/mod_log_debug.so
#LoadModule log_forensic_module modules/mod_log_forensic.so
#LoadModule logio_module modules/mod_logio.so
#LoadModule lua_module modules/mod_lua.so
LoadModule env_module modules/mod_env.so
#LoadModule mime_magic_module modules/mod_mime_magic.so
#LoadModule cern_meta_module modules/mod_cern_meta.so
#LoadModule expires_module modules/mod_expires.so
LoadModule headers_module modules/mod_headers.so
#LoadModule ident_module modules/mod_ident.so
#LoadModule usertrack_module modules/mod_usertrack.so
#LoadModule unique_id_module modules/mod_unique_id.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
#LoadModule remoteip_module modules/mod_remoteip.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
#LoadModule proxy_fdpass_module modules/mod_proxy_fdpass.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_express_module modules/mod_proxy_express.so
#LoadModule session_module modules/mod_session.so
#LoadModule session_cookie_module modules/mod_session_cookie.so
#LoadModule session_crypto_module modules/mod_session_crypto.so
#LoadModule session_dbd_module modules/mod_session_dbd.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
#LoadModule slotmem_plain_module modules/mod_slotmem_plain.so
#LoadModule ssl_module modules/mod_ssl.so
#LoadModule dialup_module modules/mod_dialup.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
#LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
LoadModule unixd_module modules/mod_unixd.so
#LoadModule heartbeat_module modules/mod_heartbeat.so
#LoadModule heartmonitor_module modules/mod_heartmonitor.so
#LoadModule dav_module modules/mod_dav.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule asis_module modules/mod_asis.so
#LoadModule info_module modules/mod_info.so
#LoadModule suexec_module modules/mod_suexec.so
<IfModule !mpm_prefork_module>
#LoadModule cgid_module modules/mod_cgid.so
</IfModule>
<IfModule mpm_prefork_module>
#LoadModule cgi_module modules/mod_cgi.so
</IfModule>
#LoadModule dav_fs_module modules/mod_dav_fs.so
#LoadModule dav_lock_module modules/mod_dav_lock.so
#LoadModule vhost_alias_module modules/mod_vhost_alias.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule dir_module modules/mod_dir.so
#LoadModule imagemap_module modules/mod_imagemap.so
#LoadModule actions_module modules/mod_actions.so
#LoadModule speling_module modules/mod_speling.so
LoadModule userdir_module modules/mod_userdir.so
LoadModule alias_module modules/mod_alias.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule php7_module modules/libphp7.so
<IfModule unixd_module>
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User http
Group http
</IfModule>
# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'
# server, which responds to any requests that aren't handled by a
# <VirtualHost> definition. These values also provide defaults for
# any <VirtualHost> containers you may define later in the file.
#
# All of these directives may appear inside <VirtualHost> containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#
#
# ServerAdmin: Your address, where problems with the server should be
# e-mailed. This address appears on some server-generated pages, such
# as error documents. e.g. [email protected]
#
ServerAdmin [email protected]
#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
ServerName localhost:80
#
# Deny access to the entirety of your server's filesystem. You must
# explicitly permit access to web content directories in other
# <Directory> blocks below.
#
<Directory />
AllowOverride none
Require all denied
</Directory>
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#
#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/srv/http"
SetHandler application/x-httpd-php
<Directory "/srv/http">
#
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly* --- "Options All"
# doesn't give it to you.
#
# The Options directive is both complicated and important. Please see
# http://httpd.apache.org/docs/2.4/mod/core.html#options
# for more information.
#
#Options Indexes FollowSymLinks
AddHandler type-map var
Order allow,deny
Allow from all
LanguagePriority en de
ForceLanguagePriority Prefer Fallback
Options Indexes FollowSymLinks ExecCGI
#Options Indexes
AddHandler cgi-script *.py
#SetHandler application/x-httpd-php
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# AllowOverride FileInfo AuthConfig Limit
#
AllowOverride None
#
# Controls who can get stuff from this server.
#
Require all granted
</Directory>
<Directory "/srv/http/php" >
Options Indexes FollowSymLinks ExecCGI
AddHandler cgi-script *.py
AllowOverride All
Require all granted
#SetHandler application/x-httpd-php
</Directory>
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
<IfModule dir_module>
DirectoryIndex index.html
</IfModule>
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<Files ".ht*">
Require all denied
</Files>
#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog "/var/log/httpd/error_log"
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
#LogLevel warn
LogLevel debug
AddType text/html err
AddOutputFilter Includes err
<IfModule log_config_module>
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
# You need to enable mod_logio.c to use %I and %O
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here. Contrariwise, if you *do*
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and *not* in this file.
#
CustomLog "/var/log/httpd/access_log" common
#
# If you prefer a logfile with access, agent, and referer information
# (Combined Logfile Format) you can use the following directive.
#
#CustomLog "/var/log/httpd/access_log" combined
</IfModule>
<IfModule alias_module>
#
# Redirect: Allows you to tell clients about documents that used to
# exist in your server's namespace, but do not anymore. The client
# will make a new request for the document at its new location.
# Example:
# Redirect permanent /foo http://www.example.com/bar
#
# Alias: Maps web paths into filesystem paths and is used to
# access content that does not live under the DocumentRoot.
# Example:
# Alias /webpath /full/filesystem/path
#
# If you include a trailing / on /webpath then the server will
# require it to be present in the URL. You will also likely
# need to provide a <Directory> section to allow access to
# the filesystem path.
#
# ScriptAlias: This controls which directories contain server scripts.
# /ScriptAliases are essentially the same as Aliases, except that
# documents in the target directory are treated as applications and
# run by the server when requested rather than as documents sent to the
# client. The same rules about trailing "/" apply to ScriptAlias
# directives as to Alias.
#
Alias /cgi-bin/ "/srv/http/cgi-bin/"
<Directory /srv/http/cgi-bin/>
AddHandler cgi-script cgi py
Options ExecCGI
</Directory>
</IfModule>
<IfModule cgid_module>
#
# ScriptSock: On threaded servers, designate the path to the UNIX
# socket used to communicate with the CGI daemon of mod_cgid.
#
#Scriptsock cgisock
</IfModule>
#
# "/srv/http/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
#<Directory "/srv/http/cgi-bin">
# AllowOverride None
# Options ExecCGI
# AddHandler cgi-script py
# Require all granted
#</Directory>
<IfModule mime_module>
#
# TypesConfig points to the file containing the list of mappings from
# filename extension to MIME-type.
#
TypesConfig conf/mime.types
#
# AddType allows you to add to or override the MIME configuration
# file specified in TypesConfig for specific file types.
#
#AddType application/x-gzip .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#
# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
#
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
#AddHandler cgi-script .cgi
# For type maps (negotiated resources):
#AddHandler type-map var
#
# Filters allow you to process content before it is sent to the client.
#
# To parse .shtml files for server-side includes (SSI):
# (You will also need to add "Includes" to the "Options" directive.)
#
#AddType text/html .shtml
#AddOutputFilter INCLUDES .shtml
</IfModule>
#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type. The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
#MIMEMagicFile conf/magic
#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#
#
# MaxRanges: Maximum number of Ranges in a request before
# returning the entire resource, or one of the special
# values 'default', 'none' or 'unlimited'.
# Default setting is to accept 200 Ranges.
#MaxRanges unlimited
#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall may be used to deliver
# files. This usually improves server performance, but must
# be turned off when serving from networked-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
# Defaults: EnableMMAP On, EnableSendfile Off
#
#EnableMMAP off
#EnableSendfile on
# Supplemental configuration
#
# The configuration files in the conf/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.
# Server-pool management (MPM specific)
Include conf/extra/httpd-mpm.conf
# Multi-language error messages
Include conf/extra/httpd-multilang-errordoc.conf
# Fancy directory listings
Include conf/extra/httpd-autoindex.conf
# Language settings
Include conf/extra/httpd-languages.conf
# User home directories
Include conf/extra/httpd-userdir.conf
# Real-time info on requests and configuration
#Include conf/extra/httpd-info.conf
# Virtual hosts
Include conf/extra/httpd-vhosts.conf
# Local access to the Apache HTTP Server Manual
#Include conf/extra/httpd-manual.conf
# Distributed authoring and versioning (WebDAV)
#Include conf/extra/httpd-dav.conf
# Various default settings
Include conf/extra/httpd-default.conf
# Configure mod_proxy_html to understand HTML4/XHTML1
<IfModule proxy_html_module>
Include conf/extra/proxy-html.conf
# config for php7
Include conf/extra/php7_module.conf
</IfModule>
# Secure (SSL/TLS) connections
#Include conf/extra/httpd-ssl.conf
#
# Note: The following must must be present to support
# starting without SSL on platforms with no /dev/random equivalent
# but a statically compiled-in mod_ssl.
#
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
There is also a good note at https://serverfault.com/questions/107467/attempt-to-invoke-directory-as-script which advises changing ScriptAlias to Alias, a mistake I had made just like that OP. As you can see from my config file, I have corrected this.
I didn't find any other useful information on Google.
So, any suggestions for what I need to change next?
|
Two things:
You need a <FilesMatch> around the SetHandler directive:
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
It appears you can make just about anything into a "PHP file" with the <FilesMatch> directive.
After that, I don't think you need an asterisk in the AddHandler directive for CGI; a plain extension suffices:
AddHandler cgi-script .py
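Putting both fixes together, the relevant portion of the <Directory "/srv/http"> block might look like the following sketch (based on the paths from the question; not the complete block):

```apache
<Directory "/srv/http">
    Options Indexes FollowSymLinks ExecCGI
    AllowOverride None
    Require all granted

    # Hand only *.php files to PHP, so directory requests fall through
    # to mod_autoindex instead of being "invoked as script"
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>

    # AddHandler takes literal extensions, not globs
    AddHandler cgi-script .py
</Directory>
```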
| "attempt to invoke directory '/srv/http' as script" error with "SetHandler application/x-httpd-php" enabled in apache config on archlinux |
1,463,327,505,000 |
I am using CentOS 6.7 (with the latest Java installed) and downloaded solr-5.3.2.tgz from http://a.mbbsindia.com/lucene/solr/5.3.2/. I then unzipped it to the directory
/opt/solr-5.3.2 and started Solr using the command
bin/solr start -e cloud -noprompt
After that, Solr started fine without any issues, and I checked that the ports are listening and connections are established.
Below is the output from starting Solr.
Welcome to the SolrCloud example!
Starting up 2 Solr nodes for your example SolrCloud cluster.
Creating Solr home directory /opt/solr-5.3.2/example/cloud/node1/solr
Cloning /opt/solr-5.3.2/example/cloud/node1 into
/opt/solr-5.3.2/example/cloud/node2
Starting up Solr on port 8983 using command:
bin/solr start -cloud -p 8983 -s "example/cloud/node1/solr"
Waiting up to 30 seconds to see Solr running on port 8983 [\]
Started Solr server on port 8983 (pid=8560). Happy searching!
Starting up Solr on port 7574 using command:
bin/solr start -cloud -p 7574 -s "example/cloud/node2/solr" -z localhost:9983
Waiting up to 30 seconds to see Solr running on port 7574 [\]
Started Solr server on port 7574 (pid=8776). Happy searching!
Connecting to ZooKeeper at localhost:9983 ...
Uploading /opt/solr-5.3.2/server/solr/configsets/data_driven_schema_configs/conf for config gettingstarted to ZooKeeper at localhost:9983
Creating new collection 'gettingstarted' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted
{
"responseHeader":{
"status":0,
"QTime":22130},
"success":{"":{
"responseHeader":{
"status":0,
"QTime":20693},
"core":"gettingstarted_shard2_replica2"}}}
Enabling auto soft-commits with maxTime 3 secs using the Config API
POSTing request to Config API: http://localhost:8983/solr/gettingstarted/config
{"set-property":{"updateHandler.autoSoftCommit.maxTime":"3000"}}
Successfully set-property updateHandler.autoSoftCommit.maxTime to 3000
SolrCloud example running, please visit: http://localhost:8983/solr
Now the problem is that when I try to open the Solr console at http://localhost:8983/solr or http://localhost:8983, I get the error
"SolrCore Initialization Failures"
and the connection is also lost.
Note: checklist
Ports are listening
Cores were created using the command
Restarted many times
|
You need to allow access through the firewall (e.g. iptables on CentOS machines). Once you add the appropriate rule, you will be able to access the machine.
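On CentOS 6 this typically means opening Solr's port in iptables. A sketch of the usual commands, run as root (rule ordering relative to existing REJECT rules may need adjusting on your system):

```sh
# Allow inbound TCP connections to Solr's default port
iptables -I INPUT -p tcp --dport 8983 -j ACCEPT

# Persist the rule across reboots (CentOS 6 initscripts)
service iptables save
```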
| Solr Running fine but am getting "SolrCore Initialization Failures" when i tried to open my console "localhost:8983" |
1,463,327,505,000 |
Will VyOS still work correctly if I install Shorewall-lite on it? Shorewall seems like an easier way to set up a network, while VyOS seems great for day-to-day management of a router/firewall, so I was wondering whether they are compatible. From what I understand, Shorewall just generates a bunch of iptables rules, and VyOS is a Linux distro with iptables, so it seems to me that it should work. But I thought I would check whether there are any other caveats about why they might not be compatible before putting in the research to set something like this up.
|
Shorewall-lite is a very lightweight tool for managing a firewall configuration generated on another server. It is not suitable for building the configuration itself, but it does manage the generated configuration. It runs quite well on OpenWRT, which is a distribution intended for routers.
The current version of Shorewall is Perl-based. This can make the installation much larger than you would want on a router.
Of the tools I have used, I find Shorewall the easiest to use. There are example configurations that generally require little modification. I have quite easily done fairly complex configurations for a smaller network.
| Can Shorewall be used on VyOS? |
1,463,327,505,000 |
How can I reliably find the location of httpd.conf?
I am looking for a solution, or if necessary a combination of things that will find the location of httpd.conf quickly and reliably on as many Operating Systems as possible.
Thanks!
|
For me,
apachectl -V
works on both OSX and FreeBSD.
If anybody has a better answer or a continuation of this answer for other operating systems, feel free to share.
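The `-V` output contains two defines, HTTPD_ROOT and SERVER_CONFIG_FILE, which combine into the full path of httpd.conf. A small Python sketch of that combination logic (the sample output below is hypothetical; real output varies per system):

```python
import os
import re

# Hypothetical `apachectl -V` output; real output differs per system.
sample = '''Server version: Apache/2.4.18 (Unix)
 -D HTTPD_ROOT="/etc/httpd"
 -D SERVER_CONFIG_FILE="conf/httpd.conf"
'''

def config_path(output):
    """Join HTTPD_ROOT and SERVER_CONFIG_FILE into an absolute path."""
    defines = dict(re.findall(r'-D\s+(HTTPD_ROOT|SERVER_CONFIG_FILE)="([^"]*)"', output))
    conf = defines["SERVER_CONFIG_FILE"]
    # SERVER_CONFIG_FILE may already be absolute; join handles the relative case.
    return conf if os.path.isabs(conf) else os.path.join(defines["HTTPD_ROOT"], conf)

print(config_path(sample))  # → /etc/httpd/conf/httpd.conf
```

In a shell, the equivalent is simply grepping `apachectl -V` for those two defines and joining them by hand.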
| Where is httpd.conf? |
1,463,327,505,000 |
This is my .gitconfig as it stands now :-
$ cat .gitconfig
[user]
name = Shirish Agarwal
email = [email protected]
[credential]
helper = cache --timeout=3600
This is in -
$ pwd
/home/shirish
Obviously I have obfuscated my email address a bit to prevent spammers from harvesting it here.
But let's say I have another credential for another git site (a private one), and I want to have it in the global configuration, both the username and the password, so that when I pull from that site it doesn't ask me for credentials anymore.
I am guessing this is possible, but how?
|
I don't think that's possible. There are global configuration options and per-repository options. If you use different email addresses in different repositories, you need to set them on a per-repository basis.
You can configure an individual repo to use a specific user / email address which overrides the global configuration. From the root of the repo, run
git config user.name "Your Name Here"
git config user.email [email protected]
You can see the effects of these settings in the .git/config file.
The default settings are in your ~/.gitconfig and can be set like this:
git config --global user.name "Your Name Here"
git config --global user.email [email protected]
Source
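For the credential half of the question (as opposed to name/email), git's global config does accept per-URL sections, so a username and a stored password can be tied to a single host. A sketch, assuming a hypothetical host git.example.com:

```ini
# ~/.gitconfig (host name below is hypothetical)
[credential "https://git.example.com"]
    username = shirish
    # 'store' writes passwords per host to ~/.git-credentials in plain text;
    # the existing 'cache' helper keeps them in memory instead.
    helper = store
```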
| How to have a global .gitconfig for 2 or more git repos? |
1,463,327,505,000 |
I have an Apache 2.4 server. On one page, I have a link to a log file. I would like to show the file's content in the browser when the link is clicked, rather than serving the file as a download.
|
You can try this in .htaccess or apache config.
AddType text/plain .log
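If other handlers might still intercept .log files, an alternative sketch scopes the override with a FilesMatch block and ForceType:

```apache
# Render anything ending in .log inline as plain text
<FilesMatch "\.log$">
    ForceType text/plain
</FilesMatch>
```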
| Apache2.4- how to serve file as html |
1,463,327,505,000 |
What config file stores the settings for KDE window settings, sizes, etc.?
|
For KDE4, a common one is ~/.kde4/share/config/kwinrc. But some settings are controlled by the apps themselves; many apps keep their config files in the same directory.
| KDE config file for window settings, sizes, etc.? |
1,463,327,505,000 |
I am using Ubuntu 14.04 LTS and having a problem changing DNS servers. I can change the servers in the /etc/resolvconf/resolv.conf.d/base and head files and restart resolvconf using sudo resolvconf -u, and it updates /etc/resolv.conf with the changes I made. I can then dig for a hostname, and it tells me it is using the servers I just specified. However, when I run nm-tool it still shows some DNS servers, and I do not know where they are coming from. This system is not using DHCP; everything is statically configured. Just in case, I went into /etc/dhcp/dhclient.conf and added "prepend domain-name-servers 8.8.8.8", thinking this would manually add 8.8.8.8 on top of the DNS servers I see in nm-tool. After all of these changes I restarted networking, and still no luck. How can I force nm-tool to use what I specify, and how do I find out where these other entries are coming from?
|
My /etc/dhcp/dhclient.conf file uses the following configuration; notice the supersede line:
# Configuration file for /sbin/dhclient, which is included in Debian's
# dhcp3-client package.
#
# This is a sample configuration file for dhclient. See dhclient.conf's
# man page for more information about the syntax of this file
# and a more comprehensive list of the parameters understood by
# dhclient.
#
# Normally, if the DHCP server provides reasonable information and does
# not leave anything out (like the domain name, for example), then
# few changes must be made to this file, if any.
#
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
#send host-name "andare.fugue.com";
send host-name = gethostname();
#send dhcp-client-identifier 1:0:a0:24:ab:fb:9c;
#send dhcp-lease-time 3600;
supersede domain-name-servers 208.67.222.222,208.67.220.220,8.8.8.8;
# prepend domain-name-servers 208.67.222.222,208.67.220.220;
request subnet-mask, broadcast-address, time-offset, routers,
domain-name, domain-name-servers, domain-search, host-name,
dhcp6.name-servers, dhcp6.domain-search,
netbios-name-servers, netbios-scope, interface-mtu,
rfc3442-classless-static-routes, ntp-servers,
dhcp6.fqdn, dhcp6.sntp-servers;
And here is the output of nm-tool which confirms my dns servers
$ nm-tool | awk '/DNS/ {print $2}'
208.67.222.222
208.67.220.220
8.8.8.8
Perhaps what also helps is that in my /etc/NetworkManager/NetworkManager.conf, I have the line dns=dnsmasq commented out, so that NetworkManager doesn't use the dnsmasq plug-in.
In addition to this method, I've also written a script to automate updating DNS for each and every connection, which can be used as an alternative, but the idea is still the same: ignore the DNS provided by DHCP and use your own. Details here: https://unix.stackexchange.com/a/164728/85039
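For reference, the awk one-liner above simply prints the second whitespace-separated field of every line containing "DNS". The same filter expressed in Python, fed with sample lines shaped like the output shown above:

```python
# Sample lines in the shape nm-tool prints them (addresses taken from above)
sample = """  DNS:             208.67.222.222
  DNS:             208.67.220.220
  DNS:             8.8.8.8
"""

def dns_servers(output):
    # Equivalent of: awk '/DNS/ {print $2}'
    return [line.split()[1] for line in output.splitlines() if "DNS" in line]

print(dns_servers(sample))  # → ['208.67.222.222', '208.67.220.220', '8.8.8.8']
```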
| How to force nm-tool to update its DNS servers |
1,463,327,505,000 |
I have a CentOS 6 server here in our IT dept., and I want to install Nagios on it for more network and enterprise insight. It has an internal IP that matches the network scheme, as well as the hostname "Nagios".
I installed the httpd service to get Apache, and after I restarted the server and tried to look for the test landing page to make sure Apache is up, I got nothing. I tried both the hostname and the IP.
I issued a hostname command and the output was:
[root@Nagios ~]# hostname
Nagios.Nagios
my /etc/hosts is set up as shown below
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
And my /etc/sysconfig/network looks as shown:
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=Nagios
GATEWAY=172.16.22.249
So when I put my IP in, it doesn't resolve. I have restarted the httpd service and still nothing. What could the issue be?
UPDATE
Per the suggestion, I have changed the mappings in my hosts file.
This is now my hosts file:
127.0.0.1 localhost.localdomain localhost
127.0.0.1 nagios.localhost.com
172.16.22.249 nagios.localhost.com
nagios.localhost.com 172.16.22.249
172.16.22.249 localhost nagios.localhost.com
::1 localhost6.localdomain6 localhost6
|
OK, so in /etc/sysconfig/network, you want to change
HOSTNAME=Nagios
to be
HOSTNAME=Nagios.yr.domain.name
(reboot)
and then in /etc/hosts, you want to remove the line
nagios.localhost.com 172.16.22.249
and change the line
172.16.22.249 nagios.localhost.com
to read
172.16.22.249 nagios nagios.yr.domain.name
(mind the space between hostname and fqdn)
lastly, you want to remove the line
127.0.0.1 nagios.localhost.com
... alternatively, you could append "nagios" to the first line so as to read:
127.0.0.1 localhost.localdomain localhost NAGIOS
EDIT: DNS is case insensitive.
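As a quick sanity check after editing /etc/hosts, something like the following can be used; getent resolves names through the same NSS path that other programs use (the nagios name below comes from this setup and may not resolve on other machines):

```shell
# localhost should always resolve; nagios only resolves once /etc/hosts
# (or DNS) maps it, so fall back to a message instead of failing.
getent hosts localhost
getent hosts nagios || echo "nagios is not resolvable yet"
```

If the second line prints the fallback message, the hosts entry has not taken effect yet.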
| CentOS server to be used internally - issues |
1,463,327,505,000 |
I have a Raspberry Pi B+ with Arch Linux installed, and at boot time the Apache web server does not start:
[xxx@rpi ~]# systemctl status -l httpd
* httpd.service - Apache Web Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 1970-01-01 01:00:23 CET; 45 years 0 months ago
Process: 177 ExecStart=/usr/bin/apachectl start (code=exited, status=134)
Jan 01 01:00:23 rpi apachectl[177]: Assertion 'canonical' failed at src/nss-myhostname/nss-myhostname.c:204, function fill_in_hostent(). Aborting.
Jan 01 01:00:23 rpi apachectl[177]: /usr/bin/apachectl: line 79: 185 Aborted (core dumped) $HTTPD -k $ARGV
Jan 01 01:00:23 rpi systemd[1]: httpd.service: control process exited, code=exited status=134
Jan 01 01:00:23 rpi systemd[1]: Failed to start Apache Web Server.
Jan 01 01:00:23 rpi systemd[1]: Unit httpd.service entered failed state.
Jan 01 01:00:23 rpi systemd[1]: httpd.service failed.
Jan 01 01:00:24 rpi systemd-coredump[208]: Process 185 (httpd) of user 0 dumped core.
[xxx@rpi ~]#
However, if I restart Apache from an ssh terminal (after a system restart and ssh login), Apache runs properly:
[xxx@rpi ~]# systemctl restart httpd && systemctl status -l httpd
* httpd.service - Apache Web Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2015-01-05 02:48:46 CET; 336ms ago
Process: 420 ExecStart=/usr/bin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 424 (httpd)
CGroup: /system.slice/httpd.service
|-424 /usr/bin/httpd -k start
|-426 /usr/bin/httpd -k start
|-427 /usr/bin/httpd -k start
|-428 /usr/bin/httpd -k start
|-429 /usr/bin/httpd -k start
`-430 /usr/bin/httpd -k start
Jan 05 02:48:44 rpi apachectl[420]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.0.154. Set the 'ServerName' directive globally to suppress this message
Jan 05 02:48:45 rpi systemd[1]: PID file /run/httpd/httpd.pid not readable (yet?) after start.
Jan 05 02:48:46 rpi systemd[1]: Started Apache Web Server.
[xxx@rpi ~]#
What is going on? (Kernel: Linux octopustest 3.12.35-1-ARCH #1 PREEMPT Tue Dec 23 07:14:51 MST 2014 armv6l GNU/Linux, Apache/2.4.10.)
|
There's a pretty simple fix for this that worked for me. It requires editing the /etc/hosts file, which is always best avoided if possible, but it fixed all the problems I had. This error will occur whenever anything tries to resolve the machine's hostname, including Apache, Eclipse, and other applications. Sometimes it's caused by a bug in the application itself, but it can also be a local problem, caused when your /etc/hosts file is missing an entry the program requires.
Note that you will need admin access (or to be part of the sudoers group) to complete some of these steps.
First, open the hosts file using the following command: sudo nano /etc/hosts. Your hosts file should resemble the following, which is a direct copy of mine:
#
# /etc/hosts: static lookup table for host names
#
#<ip-address> <hostname.domain.org> <hostname>
127.0.0.1 localhost.localdomain localhost BEN-PC-ARCH
::1 localhost.localdomain localhost BEN-PC-ARCH
# End of file
Note that in addition to mapping 127.0.0.1 to localhost.localdomain, there is a second line (for ::1) that does the same. Adding this to the /etc/hosts file fixes the problem, for whatever reason.
| arch linux apache start at boot crash |
1,463,327,505,000 |
On httpd 2.4
alias /repo /repos
<Location /repos>
Require ip 192.168.0.7
</Location>
I can reach /repos but not /repo (403 Forbidden). If I use <Directory> instead of <Location>, nothing changes.
Why?
|
Solution found.
The aliased location needs Options Indexes.
Correct syntax:
Alias /repo /var/www/htdocs/repos
<Location /repos>
Require ip 192.168.0.7
</Location>
<Location /repo>
Options Indexes
Require ip 192.168.0.7
</Location>
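As a hedged follow-up, it may help to verify the configuration syntax before reloading; note the control script is apachectl on some systems and apache2ctl on Debian-style ones:

```shell
# Check the Apache configuration syntax; prints "Syntax OK" on success.
# Guarded so the snippet degrades gracefully where Apache is absent.
if command -v apachectl >/dev/null 2>&1; then
    apachectl -t
else
    echo "apachectl not found on this machine"
fi
```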
| HTTPD/apache 2.4: strange problem with alias |
1,463,327,505,000 |
I have installed MySQL 5.1.73 (community edition) on my Oracle Linux 5 server. I am able to start/stop MySQL and execute MySQL commands, but I am unable to locate the MySQL installation directory as well as the my.cnf file where I can change the configuration to my requirements.
|
# locate my.cnf
This command will help you find the path of the file.
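If locate has no up-to-date database (updatedb has not been run), a direct search is an alternative; the directories below are common install locations, not guaranteed on every system:

```shell
# Search likely locations for my.cnf without relying on locate's
# database; errors for missing directories are silenced.
for dir in /etc /usr/local/mysql /opt/mysql; do
    find "$dir" -name 'my.cnf' 2>/dev/null
done
```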
| MySQL 5.1.73 - community edition : Where is MySQL configuration file? |
1,463,327,505,000 |
I'm trying to configure the kernel via make menuconfig (Angstrom distribution) on a BeagleBoard-xM, but I get these errors:
make: Warning: File `/usr/src/linux-2.6.32.61/arch/arm/Makefile' has modification time 11647 s in the future
make[1]: Warning: File `scripts/Makefile.host' has modification time 11529 s in the future
HOSTCC scripts/basic/fixdep
gcc: error trying to exec 'cc1': execvp: No such file or directory
make[1]: *** [scripts/basic/fixdep] Error 1
make: *** [scripts_basic] Error 2
As far as I know, this is because cc1 is not in the PATH.
I have no Linux experience and I can't figure out what my next steps should be. Any help would really be appreciated.
|
Problem solved after installing cpp: opkg install cpp cpp-symlinks.
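A quick way to confirm this diagnosis on a box that has gcc: the compiler can report the path it would use for its cc1 helper (if it prints just cc1 with no path, it cannot find one). A guarded sketch:

```shell
# Print the cc1 path gcc would execute; a bare "cc1" means the helper
# is missing from gcc's search paths and compiles will fail as above.
if command -v gcc >/dev/null 2>&1; then
    gcc -print-prog-name=cc1
else
    echo "gcc is not installed here"
fi
```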
| cc1 not in the PATH |
1,463,327,505,000 |
I need a script that detects when a computer joins my home network and standardizes its folder structure. For example, suppose computers A, B and C on the network are configured with a standard folder structure and connected to a NAS that stores downloaded files. It doesn't matter whether I download a file from computer A, B or C: all newly downloaded files are moved to a predefined folder on the NAS. If a fourth computer, D, joins the same network, it should automatically be configured like the other three by means of a script. How can I write such a script?
|
At this point I would start looking at configuration management software, such as Puppet, CFEngine or Chef. These programs are built to solve this problem even in large server fleets. They may be a bit of overkill when you're only using them for your home environment, but if you're interested in Unix from a programming and/or sysadmin perspective, knowledge of them will be very useful for you in the long run.
| Is it possible to prepare a computer for first use in my home network with a Bash script? |
1,463,327,505,000 |
I have a following setup:
1 Postfix server, a.example.com, that needs to accept all email for any subdomain of example.com (*@*.example.com), deliver it to a mailman account, and also send mail to any email account (Gmail, Yahoo, etc.), including *@example.com.
1 hosted Exchange server, exch11.hosted.com, for example.com mail (*@example.com).
Everything works in this setup except sending emails from a.example.com to *@example.com (exch11.hosted.com).
If I have example.com in the mydomains.db file, then a.example.com does not send *@example.com mail out and delivers it locally. If I change it to *.example.com, it sends *@example.com mail to exch11.hosted.com but no longer accepts *@subdomain.example.com mail, showing an error that relaying is not allowed (it should not relay; it should deliver to a local Maildir account).
The main requirement is to have a.example.com accept mail for any subdomain and deliver mail for the main domain to exch11.hosted.com. Can anyone please help me or point me in the right direction?
Any help is welcome.
Thanks.
main.cf:
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
mydestination = hash:/etc/postfix/mydomains
unknown_local_recipient_reject_code = 550
alias_maps = hash:/etc/aliases
home_mailbox = Maildir/
smtpd_banner = mail.example.com
debug_peer_level = 2
debugger_command =
PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
xxgdb $daemon_directory/$process_name $process_id & sleep 5
sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no
manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix-2.3.3/samples
readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
virtual_alias_maps = hash:/etc/postfix/virtual, pcre:/etc/postfix/virtual.pcre
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
smtp_sasl_security_options = noplaintext
#smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender-access
smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/inbound-access,permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
mailbox_size_limit = 25600000
transport_maps = hash:/etc/postfix/transport
message_size_limit = 20240000
virtual.pcre and virtual:
/(.*)@[^.]*\.example\.com$/ mailman
transport:
# demo
.demo.example.com smtp:192.168.100.161:25
# demo mail
demo.example.com smtp:192.168.100.161
# Demo2
.demo2.example.com smtp:192.168.100.221:25
# demo2 domain
demo2.example.com smtp:192.168.100.221
mydomains:
localhost OK
mail.local OK
example.com OK
|
First of all, I am not sure this will work, but I hope it helps get you started:
Remove example.com from mydomains, as this Postfix instance does not handle mail for that domain directly.
Add virtual_alias_domains = .example.com; this should solve your subdomain issue.
Add relay_domains = example.com and specify an explicit transport for example.com, e.g.: example.com :[exch11.hosted.com]
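Pulled together, the three suggestions might look like this in configuration form (an untested sketch; the hostnames come from the question, and the transport entry keeps the default transport with exch11.hosted.com as the nexthop):

```
# main.cf additions
virtual_alias_domains = .example.com
relay_domains = example.com

# /etc/postfix/transport
example.com    :[exch11.hosted.com]
```

After editing the transport file, rebuild the map with postmap /etc/postfix/transport and reload Postfix so the new routing applies.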
| Postfix Configuration - different servers for subdomains and domain |
1,463,327,505,000 |
After attempting sudo apt upgrade on a fresh install of Parrot OS, a new kernel update is present, but trying to boot into it from GRUB gives me a kernel panic.
Here is what I tried, and the errors I've been getting:
The following partially installed packages will be configured:
apache2 linux-headers-6.5.0-13parrot1-amd64 linux-headers-amd64
linux-image-6.5.0-13parrot1-amd64 linux-image-amd64
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
Need to get 0 B of archives. After unpacking 0 B will be used.
Setting up linux-headers-6.5.0-13parrot1-amd64 (6.5.13-1parrot1) ...
/etc/kernel/header_postinst.d/dkms:
dkms: running auto installation service for kernel 6.5.0-13parrot1-amd64.
Sign command: /lib/modules/6.5.0-13parrot1-amd64/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module:
Cleaning build area...
'make' -j8 KVER=6.5.0-13parrot1-amd64 KSRC=/lib/modules/6.5.0-13parrot1-amd64/build........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.5.0-13parrot1-amd64 (x86_64)
Consult /var/lib/dkms/realtek-rtl8188eus/5.3.9~git20230101.f8ead57/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.5.0-13parrot1-amd64 failed!
run-parts: /etc/kernel/header_postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-6.5.0-13parrot1-amd64.postinst line 11.
dpkg: error processing package linux-headers-6.5.0-13parrot1-amd64 (--configure):
installed linux-headers-6.5.0-13parrot1-amd64 package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of linux-headers-amd64:
linux-headers-amd64 depends on linux-headers-6.5.0-13parrot1-amd64 (= 6.5.13-1parrot1); however:
Package linux-headers-6.5.0-13parrot1-amd64 is not configured yet.
dpkg: error processing package linux-headers-amd64 (--configure):
dependency problems - leaving unconfigured
Setting up linux-image-6.5.0-13parrot1-amd64 (6.5.13-1parrot1) ...
/etc/kernel/postinst.d/dkms:
dkms: running auto installation service for kernel 6.5.0-13parrot1-amd64.
Sign command: /lib/modules/6.5.0-13parrot1-amd64/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module:
Cleaning build area...
'make' -j8 KVER=6.5.0-13parrot1-amd64 KSRC=/lib/modules/6.5.0-13parrot1-amd64/build.........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.5.0-13parrot1-amd64 (x86_64)
Consult /var/lib/dkms/realtek-rtl8188eus/5.3.9~git20230101.f8ead57/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.5.0-13parrot1-amd64 failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
dpkg: error processing package linux-image-6.5.0-13parrot1-amd64 (--configure):
installed linux-image-6.5.0-13parrot1-amd64 package post-installation script subprocess returned error exit status 1
Setting up apache2 (2.4.57-2) ...
info: Switch to mpm prefork for package libapache2-mod-php8.2: No action required
info: Executing deferred 'a2enmod php8.2' for package libapache2-mod-php8.2
Can't locate if.pm in @INC (you may need to install the if module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.36.0 /usr/local/share/perl/5.36.0 /usr/lib/x86_64-linux-gnu/perl5/5.36 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.36 /usr/share/perl/5.36 /usr/local/lib/site_perl) at /usr/sbin/a2enmod line 15.
BEGIN failed--compilation aborted at /usr/sbin/a2enmod line 15.
dpkg: error processing package apache2 (--configure):
installed apache2 package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of linux-image-amd64:
linux-image-amd64 depends on linux-image-6.5.0-13parrot1-amd64 (= 6.5.13-1parrot1); however:
Package linux-image-6.5.0-13parrot1-amd64 is not configured yet.
dpkg: error processing package linux-image-amd64 (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
linux-headers-6.5.0-13parrot1-amd64
linux-headers-amd64
linux-image-6.5.0-13parrot1-amd64
apache2
linux-image-amd64
Scanning application launchers
Removing duplicate launchers or broken launchers
Launchers are updated
E: Sub-process /usr/bin/dpkg returned an error code (1)
Setting up linux-headers-6.5.0-13parrot1-amd64 (6.5.13-1parrot1) ...
/etc/kernel/header_postinst.d/dkms:
dkms: running auto installation service for kernel 6.5.0-13parrot1-amd64.
/usr/sbin/dkms: line 2497: echo: write error: Broken pipe
Sign command: /lib/modules/6.5.0-13parrot1-amd64/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module:
Cleaning build area...
'make' -j8 KVER=6.5.0-13parrot1-amd64 KSRC=/lib/modules/6.5.0-13parrot1-amd64/build.........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.5.0-13parrot1-amd64 (x86_64)
Consult /var/lib/dkms/realtek-rtl8188eus/5.3.9~git20230101.f8ead57/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.5.0-13parrot1-amd64 failed!
run-parts: /etc/kernel/header_postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-6.5.0-13parrot1-amd64.postinst line 11.
dpkg: error processing package linux-headers-6.5.0-13parrot1-amd64 (--configure):
installed linux-headers-6.5.0-13parrot1-amd64 package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of linux-headers-amd64:
linux-headers-amd64 depends on linux-headers-6.5.0-13parrot1-amd64 (= 6.5.13-1parrot1); however:
Package linux-headers-6.5.0-13parrot1-amd64 is not configured yet.
dpkg: error processing package linux-headers-amd64 (--configure):
dependency problems - leaving unconfigured
Setting up linux-image-6.5.0-13parrot1-amd64 (6.5.13-1parrot1) ...
/etc/kernel/postinst.d/dkms:
dkms: running auto installation service for kernel 6.5.0-13parrot1-amd64.
/usr/sbin/dkms: line 2497: echo: write error: Broken pipe
Sign command: /lib/modules/6.5.0-13parrot1-amd64/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module:
Cleaning build area...
'make' -j8 KVER=6.5.0-13parrot1-amd64 KSRC=/lib/modules/6.5.0-13parrot1-amd64/build........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.5.0-13parrot1-amd64 (x86_64)
Consult /var/lib/dkms/realtek-rtl8188eus/5.3.9~git20230101.f8ead57/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.5.0-13parrot1-amd64 failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
dpkg: error processing package linux-image-6.5.0-13parrot1-amd64 (--configure):
installed linux-image-6.5.0-13parrot1-amd64 package post-installation script subprocess returned error exit status 1
Setting up apache2 (2.4.57-2) ...
info: Switch to mpm prefork for package libapache2-mod-php8.2: No action required
info: Executing deferred 'a2enmod php8.2' for package libapache2-mod-php8.2
Can't locate if.pm in @INC (you may need to install the if module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.36.0 /usr/local/share/perl/5.36.0 /usr/lib/x86_64-linux-gnu/perl5/5.36 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.36 /usr/share/perl/5.36 /usr/local/lib/site_perl) at /usr/sbin/a2enmod line 15.
BEGIN failed--compilation aborted at /usr/sbin/a2enmod line 15.
dpkg: error processing package apache2 (--configure):
installed apache2 package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of linux-image-amd64:
linux-image-amd64 depends on linux-image-6.5.0-13parrot1-amd64 (= 6.5.13-1parrot1); however:
Package linux-image-6.5.0-13parrot1-amd64 is not configured yet.
dpkg: error processing package linux-image-amd64 (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
linux-headers-6.5.0-13parrot1-amd64
linux-headers-amd64
linux-image-6.5.0-13parrot1-amd64
apache2
linux-image-amd64
I'm sure the error lies in the last lines:
Errors were encountered while processing:
linux-headers-6.5.0-13parrot1-amd64
linux-headers-amd64
linux-image-6.5.0-13parrot1-amd64
apache2
linux-image-amd64
I don't know what to do from here; sudo dpkg --configure -a and sudo apt --fix-broken install both result in the same issue. It might be a configuration problem.
If you need any extra information, let me know, and thanks for your time.
|
Non-configured linux-image and linux-headers are solved by:
Booting the previous kernel
Removing dkms
Upgrading the kernel
Reinstalling dkms
Building the rtl8188eus
To solve the following error:
dpkg: error processing package apache2 (--configure):
installed apache2 package post-installation script subprocess returned error exit status 2
Run the following commands:
sudo apt clean
sudo mv /var/lib/dpkg/info/apache2 /tmp/
sudo dpkg --remove --force-remove-reinstreq apache2
To solve:
W: Possible missing firmware /lib/firmware/*
Here is a detailed answer on U&L.
| kernel panic after after parrotOS upgrade |
1,463,327,505,000 |
I downloaded vsftpd-3.0.5.tar.gz and successfully compiled it to generate vsftpd on computer A. Then I copied vsftpd to another computer, B, and created a new file called vsftpd.conf on computer A. Then I started vsftpd and could see the service listening through the command netstat -tulnp | grep 21. However, at this point I was unable to access vsftpd and got the following error. Does vsftpd not default to using the system usernames and passwords? Must a new username and password be added?
--
I see that vsftpd distinguishes between anonymous users, local users, and virtual users. I think my problem is that I want to log in with a local user's login name from another computer, C. Is that not feasible? Or is it just a problem with my vsftpd.conf configuration?
error:
C:\Users\guoya>ftp 192.168.5.2
Connected to 192.168.5.2.
500 OOPS: cannot locate user entry:nobody
Connection closed by remote host.
cat /etc/vsftpd/vsftpd.conf
anonymous_enable=NO
anon_upload_enable=NO
anon_mkdir_write_enable=NO
anon_other_write_enable=No
anon_world_readable_only=NO
listen=YES
write_enable=YES
local_enable=YES
local_root=/home/tftpShare
local_umask=022
chroot_local_user=YES
chroot_list_enable=NO
start vsftpd:
# ./vsftpd /etc/vsftpd/vsftpd.conf
netstat -tulnp | grep 21
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN 553/vsftpd
|
My problem has been superficially solved, but I am still confused. I just want to log in with the username root or another newly added username. Why do I have to execute adduser nobody before adding a new username?
# adduser nobody
Changing password for nobody
New password:
Bad password: too weak
Retype password:
passwd: password for nobody changed by root
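For what it's worth, a likely explanation: vsftpd drops privileges to the account named by its nopriv_user option, which defaults to nobody, so that account must exist even for local-user logins. Instead of creating nobody, the option can be pointed at an existing unprivileged account; a sketch (the ftp account name here is an assumption about your system):

```
# vsftpd.conf
# Assumption: an unprivileged "ftp" account already exists on this box.
nopriv_user=ftp
```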
| Would vsftpd not default to using the system username and password? |
1,694,680,815,000 |
My PC is running EndeavourOS Linux with KDE Plasma as the DE.
I have recently discovered the concept of a compose key while skimming through the colemak website. In the section where it's being introduced the author mentions:
You can edit the tables and add pretty much anything though!
While, after a bit of searching through the system settings, I have been able to set a key as my compose key, I have yet to find this fabled compose table where I can add anything I'd like the compose key to produce, and web searches have yielded no results so far.
I'd appreciate it if anybody could point me to where this file is located. Thank you.
|
Thank you all very much for the plentiful advice...
Found it out myself.
For future reference in case anyone else has the same issue, check this out:
https://userbase.kde.org/Tutorials/ComposeKey#Optional_Tweaking_of_XCompose_Map
In summary, or in case that guide is ever unavailable:
Create the file ~/.XCompose
Paste this into it:
# ~/.XCompose
# This file defines custom Compose sequences for Unicode characters
# Import default rules from the system Compose file:
include "/usr/share/X11/locale/en_US.UTF-8/Compose"
# To put some stuff onto compose key strokes:
<Multi_key> <minus> <greater> : "→" U2192 # Compose - >
<Multi_key> <colon> <parenright> : "☺" U263A # Compose : )
<Multi_key> <h> <n> <k> : "hugs and kisses" # Compose h n k
<Multi_key> <less> < p> : "< p>" # Compose < p
Log out and log back in to apply the settings; no need to restart.
Based on the provided examples, I trust you can figure out how to add your own entries here.
| Changing compose Tables to add new entries? |
1,694,680,815,000 |
Switched from Windows to Linux yesterday after 10 years.
Trying to learn Linux, and it's damn hard just to do normal things with my poor 3 brain cells. I can literally say I'm going insane.
Build: Intel i3 530, DH55TC motherboard, no graphics/sound card. OS: Debian 12, PulseAudio 16.1.
I mapped my line in, line out and mic jacks to 5.1 surround speakers connected via 3 analog cables. I used this guide: https://gist.github.com/Brainiarc/8ff198a5ac3f0050f68795233c4866d0. Thank you.
Front-left, front-right, rear-left and rear-right are all mapped correctly, but my front-center is mapped to LFE and vice versa: when I test front-center, sound comes out of the subwoofer, and front-center gives static. I have literally tried different channel orders in default.pa and daemon.conf but nothing worked.
What changes have I made so far?
~/.config/pulse/default.pa # added this line and set this as my default sink & source
load-module module-combine-sink channels=6 channel_map=front-left,front-right,rear-left,rear-right,front-center,lfe
~/.config/pulse/daemon.conf # added these lines
remixing-produce-lfe = yes
remixing-consume-lfe = yes
lfe-crossover-freq = 80
default-sample-channels = 6
/usr/share/pulseaudio/alsa-mixer/profile-sets/
[Mapping analog-surround-51]
device-strings = surround51:%f
channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe ##changed only this line##
paths-output = analog-output analog-output-lineout analog-output-speaker
priority = 13
direction = output
My pactl info
Server String: /run/user/1000/pulse/native
Library Protocol Version: 35
Server Protocol Version: 35
Is Local: yes
Client Index: 5
Tile Size: 65472
User Name: yamihero777
Host Name: !NotPC
Server Name: pulseaudio
Server Version: 16.1
Default Sample Specification: s16le 6ch 44100Hz
Default Channel Map: front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center
Default Sink: combined
Default Source: combined.monitor
Cookie: ead8:d297
I really don't understand why the default channel map is entirely different. I tried uncommenting the default channel map in /etc/pulse/daemon.conf, but the issue still wasn't fixed, and pactl info showed the same even after a reboot.
On Windows 11 (a different SSD, not dual boot) it works, so I can't say it's a hardware issue. Kindly help me resolve this.
|
Hope this helps someone.
First, make sure to copy the /etc/pulse/daemon.conf and default.pa files to the ~/.config/pulse folder.
In daemon.conf in the ~/.config/pulse folder, uncomment only the default channel map and set it as follows:
for 5.1: front-left,front-right,rear-left,rear-right,front-center,lfe
for 7.1: front-left,front-right,rear-left,rear-right,front-center,lfe,side-left,side-right
No changes need to be made in default.pa; just leave it as it is.
Type in a terminal:
killall pulseaudio
It will restart. If not, type:
pulseaudio --start
Now type in a terminal:
pactl info
What you see there is what the system reads; until it is corrected, nothing will change:
the default sample channels should match whatever your sound system is;
the default channel map should match the map we set in daemon.conf in the config folder.
Type pacmd list-sinks, select the sink that makes sense to you, and set it as default using the set-default-sink command. The same goes for sources.
Reboot the PC and you should see your sound system's profile (e.g. Analog Output 5.1) in the PulseAudio volume control and the control center.
Test in the control center. All should be fine. If pactl info shows everything is good but channels are still routed wrong, you need to swap the cables on the speakers and test again.
After 1 week of trial and error, my sound system is up and I gained excellent knowledge. Double win. Cheers!
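Putting the daemon.conf step above into concrete form, the uncommented lines might look like this for a 5.1 setup (an untested sketch; the file is the copy under ~/.config/pulse/ described above):

```
; ~/.config/pulse/daemon.conf
default-sample-channels = 6
default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe
```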
| Front center mapped as LFE, LFE as front center on my 5.1 Debian 12 |
1,694,680,815,000 |
Solving a problem sometimes requires enabling kernel options, e.g. with menuconfig. How do I know which ones to enable/disable?
|
I normally Google for it. Sometimes I even check the corresponding patches to understand what the option is about. There's no central place or authority that has this information.
The kernel Git tree, Git logs and the LKML are the primary sources of information about kernel options/features:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/ (mirror: https://github.com/torvalds/linux )
https://lkml.org/
https://lwn.net/
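Beyond web searches, each option's help text lives next to its definition in the tree's Kconfig files, and menuconfig shows the same text (pressing / inside menuconfig searches symbols). Below is a runnable sketch of the grep approach; the mini-tree is fabricated only to keep the example self-contained, and in practice you would grep from the root of a real kernel checkout:

```shell
# Build a throwaway mini-tree so the search can be demonstrated anywhere.
mkdir -p /tmp/ktree-demo/init
cat > /tmp/ktree-demo/init/Kconfig <<'EOF'
config IKCONFIG
    tristate "Kernel .config support"
    help
      This option enables the complete Linux kernel ".config" file
      contents to be saved in the kernel.
EOF

# The actual technique: find where a symbol is defined, then read the
# surrounding help text in that file.
grep -rn --include='Kconfig*' 'config IKCONFIG' /tmp/ktree-demo
```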
| How should I know which kernel options to use? |
1,694,680,815,000 |
No matter what I do, my i3 config keeps displaying different colors on active windows. This is what it looks like (notice the lighter sidebar compared to the title bar):
And this is the relevant part of my config file:
set $r "#b60000"
set $w "#f4f6ff"
set $g "#23252e"
set $l "#555555"
set $y "#ffff00"
set $b "#000000"
client.focused $r $r $w $r $r
client.focused_inactive $g $g $l $g $g
client.unfocused $g $g $l $g $g
client.urgent $y $y $b $y $y
client.placeholder $r $r $w $r $r
client.background $w
For some reason (maybe old version, running 4.18.2), I can't set client.focused_tab_title (source), so hopefully the solution is somewhere else.
|
I accidentally discovered that the issue is caused by compton's inactive-dim option. Anything higher than zero will cause the issue, so setting inactive-dim = 0.0 solved it.
Perhaps this can be worked around by adding the borders to fade-exclude, but I won't bother with this for now.
| i3: Title bar and border colors not matching |
1,694,680,815,000 |
There are a lot of configuration files in /etc. Some of them are used by installed applications like Samba, but are there configs that are looked up directly by the kernel, for example passwd, group, hostname and so on? And if so, which of them are directly for the kernel?
|
A few words about the Linux kernel and its role.
By itself, the kernel does literally nothing; I'm not joking. When you load it, it tries to initialize your hardware and then tries to execute /sbin/init from the root filesystem (whatever is mounted at /), and that's it. There's a caveat, though.
The kernel itself is a large body of algorithms which provide APIs (translating userspace calls into something the hardware knows what to do with) for applications to work with, and even then those applications don't use these APIs directly; they call them via a middleman such as glibc (C language APIs) or Mesa (OpenGL and Vulkan APIs).
Speaking of the caveat mentioned earlier:
The kernel can be instructed to do work, e.g. via iptables or nftables, to process network packets, but the kernel doesn't generate those packets; it either receives them from outside or sends them because running applications requested it. One minor exception is ping, which sort of kinda looks like the kernel doing work, but for ping to work, userspace has to instruct the kernel how to configure the network first. When the kernel boots, there are no network protocols configured at all.
| What configuration files does linux kernel lookup directly in root filesystem? |
1,694,680,815,000 |
I had installed TigerVNC on my RHEL 7 server. After that I forgot the password when trying to connect via VNC Viewer, so I uninstalled the package, which removed the /usr/bin/vncpasswd file along with it.
I tried to re-install the package and run the command vncserver, but it keeps failing, saying
vncserver: couldn't find "/usr/bin/vncpasswd"
I tried using the touch command to create an empty vncpasswd file. With that, vncserver launched, but I can't log in from VNC Viewer since the file is empty, and it keeps giving authentication failures.
|
After digging around, I found the following.
You have to uninstall tigervnc-server using the following command:
# sudo yum remove tigervnc-server
After the uninstallation of the package is complete, re-install TigerVNC:
# sudo yum install tigervnc-server
# systemctl daemon-reload
# vncpasswd root
root is the user I'm setting the password for; you can change it to your preference.
After you run the above command, you will be prompted to set the password and verify it.
After all the above commands have been executed, run the command below:
# vncserver
After this you will be able to start the VNC server and connect with VNC Viewer.
| VNC server fails to start with an error: vncserver: couldn't find "/usr/bin/vncpasswd" |
1,694,680,815,000 |
I want to configure Zathura so that when I open a document, the PDF page size automatically adjusts to the size of the window.
Practical example
If i close the Zathura like this:
When I open it again, it stays the same size.
I want that when I open it again it automatically adjusts to the width of the window regardless of what state it had previously.
An example of what I mean is when you press "s": it automatically adjusts to fill the window.
I tried this in zathurarc but it doesn't work, even with "width"; it does nothing regardless of the option. I tried changing colors and that worked, so I know zathura is reading the file correctly.
set adjust-open "best-fit"
Thanks.
|
I was also looking for a solution to this, and like you, I couldn't get adjust-open to work either. I also couldn't find any discussion about this in the Zathura issue tracker. The other answer mentioning set adjust_window width doesn't work.
Unfortunately, I couldn't find a solution within Zathura itself, but after some fiddling with xdotool, I whipped up the following shell script:
#!/bin/sh
zathura "$@" & PID="$!"
while true; do
    window_id="$(xdotool search --onlyvisible --pid "$PID" 2>/dev/null)"
    if [ -n "$window_id" ]; then
        xdotool windowactivate --sync "$window_id" windowfocus --sync "$window_id" \
            key s key --delay 0 g g
        break
    fi
    sleep 0.1  # don't busy-wait while the window is being created
done
This launches zathura and waits for a window to be created for that process ID. Then it activates and focuses the window, and sends the s keystroke to fit the document to the window width, followed by gg to go to the beginning. I don't like Zathura remembering the document position, but you can remove the second key --delay 0 g g if you don't need this.
I also have the following in my ~/.config/zathura/zathurarc file:
set window-height 3000
set window-width 3000
This starts the window "maximized".
Note that xdotool only works on X11, and I'm not aware of alternatives for Wayland.
This could be much easier if Zathura allowed the feedkeys shortcut function to be used outside of map, or any of the other shortcut functions for that matter.
Hope this helps someone else.
| Make Zathura open by default a document with the window size |
1,694,680,815,000 |
OK. Due to my cable company mistakenly cutting the line for my internet connection, I am stuck using my phone's tether to get internet access. They say it will be two to three weeks before they can fix it, and a month is longer than I'm really willing to wait. My only other option is DSL, and since I don't have a phone line installed, I'm pretty sure the same issue would arise.
Getting the tether to my web browser is a cakewalk, but the Software Center and Terminal programs will not access the internet. The Terminal adds before my proxy address, and the Software Center just doesn't work, with no information as to why. I am simply looking to upgrade my system, so all the people who will give me an opinion versus a solution I have time to ignore, but I would really like full access to my system, not just internet via my web browser. Help would be appreciated.
|
As I have been informed that a comment is not an answer: I solved my issue with the terminal. I'm not sure about the Software Manager, but since I know how to install software with the Terminal, that piece is irrelevant, as I have access to the whole system. The solution I used is located here; it has only been tested with the terminal by me. I have used it on several distributions in VirtualBox just to see if it worked in a virtual setting, and it does.
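For reference (the linked solution isn't quoted here), the usual way to point terminal tools such as curl, wget, and partially apt at a proxy is via environment variables; the address and port below are placeholders for your tether's actual proxy endpoint:

```shell
# Placeholder address/port: substitute your phone tether's real proxy endpoint.
export http_proxy="http://192.168.42.129:8080"
export https_proxy="$http_proxy"
```

apt can additionally be configured through an Acquire::http::Proxy entry in /etc/apt/apt.conf.d/ if it ignores the environment.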
| Tethering android phone |
1,694,680,815,000 |
I have a PC with Debian 9 stretch and a router (NanoPi R4S with OpenWrt), both running BIND 9. I have set the min-cache-ttl parameter to 80000 seconds on Debian stretch, but when I try to set it on the NanoPi too, it tells me the maximum is 90 seconds! How is that possible? How can I set a higher value? Thank you.
debian 9 (/etc/bind/named.conf.options):
options {
directory "/var/cache/bind";
listen-on-v6 { none; };
recursion yes;
allow-transfer { none; };
dump-file "/var/cache/bind/cache.db";
notify no;
allow-notify { none; };
forward only;
forwarders {
8.8.8.8;
};
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation no;
auth-nxdomain no; # conform to RFC1035
attach-cache yes;
min-cache-ttl 86400;
max-cache-ttl 87000;
max-cache-size 1024M;
};
Nano PI R4S (/etc/bind/named.conf):
options {
directory "/var/cache/bind";
dump-file "/var/cache/bind/cache.db";
listen-on-v6 { none; };
recursion yes;
allow-transfer { none; };
notify no;
allow-notify { none; };
forward only;
forwarders {
8.8.8.8;
};
auth-nxdomain no; # conform to RFC1035
dnssec-validation no;
attach-cache yes;
min-cache-ttl 80000; ## ERROR! Max is 90!
max-cache-ttl 43200;
max-cache-size 1024M;
};
|
How can I set a higher value?
Get the bind-9.14 sources, change the value of MAX_MIN_CACHE_TTL and compile the bind package yourself.
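A rough sketch of that recompilation. To stay self-contained, the block below demonstrates the macro substitution on a sample copy of the line; the commented-out commands show how the same edit would be applied to a real source tree (the version number, file locations, and the new cap of 86400 are all assumptions):

```shell
# Demonstrate the edit on a sample copy of the macro line:
printf '#define MAX_MIN_CACHE_TTL 90\n' > /tmp/max_min_cache_ttl_sample.c
sed -i 's/MAX_MIN_CACHE_TTL 90/MAX_MIN_CACHE_TTL 86400/' /tmp/max_min_cache_ttl_sample.c
cat /tmp/max_min_cache_ttl_sample.c

# Against a real source tree, the workflow would be roughly (not run here):
#   curl -O https://ftp.isc.org/isc/bind9/9.18.4/bind-9.18.4.tar.xz
#   tar xf bind-9.18.4.tar.xz && cd bind-9.18.4
#   grep -rl MAX_MIN_CACHE_TTL . | xargs sed -i 's/MAX_MIN_CACHE_TTL 90/MAX_MIN_CACHE_TTL 86400/'
#   ./configure && make && sudo make install
```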
How is it possible?
Debian
Before bind-9.13, Debian carried their own patch, 0003-Add-min-cache-ttl-and-min-ncache-ttl-keywords.patch, which added the min-cache-ttl feature to their bind package.
Obviously the maximum value for min-cache-ttl was > 90 s, since there are no checks here: https://sources.debian.org/patches/bind9/1:9.10.3.dfsg.P4-12.3+deb9u6/10_min-cache-ttl.diff/#L30
With bind-9.13, Debian removed their patch (http://metadata.ftp-master.debian.org/changelogs/main/b/bind9/unstable_changelog), since upstream had already ported the feature in that version.
OpenWRT
OpenWrt compiles their bind package directly from the ISC source files. Here is the Makefile: https://github.com/openwrt/packages/blob/master/net/bind/Makefile
PKG_VERSION:=9.18.4
PKG_SOURCE_URL:= \
https://www.mirrorservice.org/sites/ftp.isc.org/isc/bind9/$(PKG_VERSION) \
https://ftp.isc.org/isc/bind9/$(PKG_VERSION)
The same was true for bind-9.14.2:
https://github.com/openwrt/packages/commit/868f29d4ee61205e65994f67f23a02198a9dea33#diff-eb969664858d3b384948d5ecec074c7cf894444a8e293aebb09d21720a00f5b5
In bind, min-cache-ttl was added on Nov 14, 2018 and released in version 9.13.4:
commit https://github.com/isc-projects/bind9/commit/e9a939841dcf37021aab189caee836bfb59b45dc
The min-cache-ttl maximum value is defined here:
https://github.com/isc-projects/bind9/commit/e9a939841dcf37021aab189caee836bfb59b45dc?diff=unified#diff-d67681a4334d52b7a3e6aa8ff9a56072834cf2f4e5158cbfd4cb3b232c731bf7R24
#define MAX_MIN_CACHE_TTL 90
https://github.com/isc-projects/bind9/commit/e9a939841dcf37021aab189caee836bfb59b45dc?diff=unified#diff-d6bb5d421804dd0a1b7bd92dcd1f76348321360d9dbf5512257c4753cc815443R972
static intervaltable intervals[] = {
...
{ "min-cache-ttl", 1, MAX_MIN_CACHE_TTL }, /* 90 secs */
...
};
So in bind, and therefore in OpenWrt, the maximum value for min-cache-ttl has been 90 from the very beginning.
| bind9 configuration file PROBLEM |
1,694,680,815,000 |
I am trying to emulate an mtp device with virt-manager. I see at https://qemu-project.gitlab.io/qemu/system/devices/usb.html that usb-mtp,rootdir=dir as in -device usb-mtp,rootdir=dir could be used to do this with qemu. How can I configure virt-manager to do this (custom xml welcome)?
|
I wasn't able to find an XML equivalent of the command, but I was able to change the XML to pass the arguments directly. I needed to insert xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" into the opening <domain> element (keeping the existing type="kvm") and put
<qemu:commandline>
<qemu:arg value="-device"/>
<qemu:arg value="usb-mtp,rootdir=dir"/>
</qemu:commandline>
as the last element before the closing </domain>. https://blog.vmsplice.net/2011/04/how-to-pass-qemu-command-line-options.html and the #virt IRC channel on OFTC were helpful with this. Warning: if a change doesn't parse correctly when saving via virt-manager's XML editing pane, it may be silently removed, so take the necessary precautions before saving and check the result afterwards.
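Put together, a sketch of how the modified domain XML begins and ends (the rootdir path is a placeholder, and the elided ... stands for the rest of your existing domain definition):

```xml
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  ...
  <qemu:commandline>
    <qemu:arg value="-device"/>
    <qemu:arg value="usb-mtp,rootdir=/path/to/shared/dir"/>
  </qemu:commandline>
</domain>
```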
| How to emulate usb-mtp device with virt-manager? |
1,694,680,815,000 |
I created a partition called sdb1 formatted as FAT32 and created 3 folders within a main folder; however, I wanted the 3 folders to have different permissions.
I tried to give the ana folder all permissions, marco permissions for the user and group to execute, and opencloud permissions for everything but the group. However, the end result was that all the folders had all permissions.
I don't understand what I am doing wrong.
|
First, you are using the FAT32 filesystem, which does not support Unix-style file ownerships and permissions. But because Unix-like operating systems assume that all files must have an owner, group and permissions, the vfat filesystem driver fakes it - by assigning all files and all directories in the filesystem the same permissions.
You can adjust the fake permissions created by the filesystem driver: by using the dmask mount option you can set the permissions for all directories on the filesystem, and with fmask for all regular files respectively. These options are specific to the vfat filesystem driver, and won't work with just any filesystem. The drivers for other filesystems that don't natively support Unix-style ownerships/permissions may have similar mount options, or some other ways to adapt the filesystem to Unix-like conventions.
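For example, a hypothetical /etc/fstab entry (the device, mount point, and uid/gid values are assumptions) that presents every directory as 0755 and every regular file as 0644:

```
# /etc/fstab — dmask applies to directories, fmask to regular files
/dev/sdb1  /data  vfat  dmask=0022,fmask=0133,uid=1000,gid=1000  0  0
```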
If you need to be able to assign different permissions to different files and/or directories within a single filesystem, FAT32 (or any FAT subtype really) is a wrong filesystem type for that.
Second, you haven't really made three separate folders: you've actually mounted one filesystem (on partition /dev/sdb1) to three separate locations. So if you created a file to /data/ana, the same file would be immediately accessible at /data/marco and /data/opencloud too.
Mounting the same filesystem to multiple locations in a single system simultaneously used to be impossible until relatively recent times (roughly, about the same time the container technology was being developed; it might have been a side effect of that). As such, the vfat filesystem driver apparently cannot handle multiple mounts of the same filesystem with different permission options. It looks like /data/ana might be the most recent mount, so it looks like the most recent set of mount options for the filesystem takes effect on all mounts (think "views") of that filesystem.
| Fstab permissions |
1,694,680,815,000 |
I reinstalled my Debian 11 and tried to configure it, but everything went wrong.
I need to reset it to a fresh state.
What I tried:
the dconf-cli command
replacing the user's .config with root's .config
but these didn't have much effect.
|
Best solution I found was from https://www.publish0x.com/a-bit-of-everything/restore-default-panel-in-lubuntu-2004-xmdjogv :
$ sudo cp /usr/share/lxqt/panel.conf ~/.config/lxqt/
$ reboot
This solution only modifies the relevant config file, and avoids re-creating the user.
| how to reset debian 11 lxqt desktop configuration |
1,694,680,815,000 |
I'm working on replacing remotes using the ir-keytable and ir-ctl commands and rc_keymap TOML config files, as they have replaced lirc. I'm using Raspberry Pi OS bullseye.
One of my remotes use an unknown protocol, so I decided to store the raw signals.
I started by storing each button signal using ir-ctl -rMY_KEY.txt --mode2 -r --device=/dev/lirc1 -1
As I know this specific remote uses a carrier of 38 kHz, I appended carrier 38000 to each signal file, as explained in the man page.
If I try to send the signal using ir-ctl -sMY_KEY.txt, it triggers the expected action.
Now, instead of having 1 file per button, I want to store the remote using a rc_keymap TOML file, as explained here
Since I don't know the protocol, I use the raw one, so I used the command ir-ctl --mode2 -r --device=/dev/lirc1 -1 to retrieve each button's signal, then copy-pasted it into my TOML file.
I now run ir-ctl -kmy_remote.toml -KMY_KEY. Nothing happens.
If I run the same command with --carrier 38000 added, it does work, with the warning warning: carrier specified but overwritten on command line
My question is: where is the carrier value defined in an rc_keymap config file, and how can I override it? I see nothing about it in the docs.
|
I had to read the C code of ir-ctl to take a guess, and found the answer. We can simply add a carrier field in the TOML file to set this value.
So, this does work
[[protocols]]
name = "MY_REMOTE"
protocol = "raw"
carrier = 38000
| IR: Specifying a carrier value with rc_keymap |
1,694,680,815,000 |
Using a script I want to modify the /etc/pam.d/common-session file based on the existing value:
if session required pam_mkhomedir.so exist, then add skel=/etc/skel/ umask=0022
if it's not present, add entire line session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
if it's commented (#), uncomment it (and apply rules 1 and 2)
What would be the simplest way to achieve this using shell script?
|
Here's the solution with GNU sed. I don't know if it's the simplest, but I've reduced your logic a bit:
#!/bin/bash
if grep -q "session required pam_mkhomedir.so" "$1"
then sed -i 's/#\?session required pam_mkhomedir\.so.*/session required pam_mkhomedir.so skel=\/etc\/skel\/ umask=0022/' "$1"
else sed -i '$ a session required pam_mkhomedir.so skel=/etc/skel/ umask=0022' "$1"
fi
Change the line that contains session required pam_mkhomedir.so to line session required pam_mkhomedir.so skel=/etc/skel/ umask=0022.
This covers the case when the line is commented (#\? zero or one #), as well as if it doesn't have skel and umask options specified.
Else, if there is no such line, append line session required pam_mkhomedir.so skel=/etc/skel/ umask=0022 to the end of the file.
You can call the script like ./script.sh /etc/pam.d/common-session, or without parameters if you replace $1 with the file path.
| Add parameter to existing config file if present |
1,643,600,211,000 |
I have a custom command line script that I want the users to access over SSH. For example when the user logs in with ssh user@server the user should be able to interact with the command line application instead of directly accessing /bin/bash (or any other shell).
Are there any configuration changes that should be done to achieve this with OpenSSH?
|
sshd_config's ForceCommand is what you're looking for.
Also, caveat: you say it's a shell script, so no matter what you do, it will still be executed by a shell. This need not be problematic, but it certainly means you need to think about what you do with user input – e.g. eval'ing it should be out of the question.
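As a sketch, the relevant sshd_config fragment could look like this (the user name and program path are assumptions; reload sshd after editing):

```
# Restrict one account to the application; all other users keep their shell.
Match User appuser
    ForceCommand /usr/local/bin/myapp
```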
| Using ssh authentication with a custom application |
1,643,600,211,000 |
You will find that echo can't be disabled: it is ignored, and it does not throw any error either when you restart the php-fpm service. There's probably no real reason that would lead you to disabling echo, but extra knowledge is always welcome.
A test config can be:
php_admin_value[disable_functions] = echo,exec,shell_exec,phpinfo
The other functions being "control" functions, to ensure the pool is being read properly. You will find that echo simply is ignored but the other functions are disabled correctly.
Link https://www.php.net/manual/en/ini.core.php says
Only internal functions can be disabled using this directive.
User-defined functions are unaffected.
So why is echo not affected? Is it not an internal function?
|
The reason echo is not affected is that echo is not a function.
Info found in https://www.php.net/manual/en/function.echo.php
echo is not a function but a language construct. Its arguments are a
list of expressions following the echo keyword, separated by commas,
and not delimited by parentheses. Unlike some other language
constructs, echo does not have any return value, so it cannot be used
in the context of an expression.
Paradoxically, the echo documentation is found in the "String Functions" section of the PHP manual, and even the URL of the page, as seen above, says "/manual/en/function.echo.php".
| Why can't ECHO be disabled (disable_functions) in PHP-FPM pools or php.ini? |
1,643,600,211,000 |
I have several squid issues, but one at a time:
WARNING! Your cache is running out of filedescriptors
This can happen when the proxy is getting a lot of calls, and can be fixed by increasing the limit, but mine isn't even "open" yet.
I found out that squid is somehow constantly connecting to itself:
(from my access.log)
1628674032.019 59108 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:3129 - ORIGINAL_DST/192.168.0.129 -
1628674032.019 59098 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:3129 - ORIGINAL_DST/192.168.0.129 -
1628674032.019 59087 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:3129 - ORIGINAL_DST/192.168.0.129 -
My configuration was originally created by pfsense, but is used on a stand-alone squid running on Ubuntu 20.04.
# This file is automatically generated by pfSense
# Do not edit manually !
acl all src all
http_access allow all
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/myCA.pem cafile=/usr/local/squid/etc/ssl_cert/myCA.crt capath=/usr/local/squid/etc/rootca/ cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/myCA.pem cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt capath=/usr/local/squid/etc/rootca/ cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE
#tcp_outgoing_address 10.10.66.1
icp_port 0
#digest_generation off
dns_v4_first on
#pid_filename /var/run/squid/squid.pid
cache_effective_user proxy
cache_effective_group proxy
error_default_language en
#icon_directory /usr/local/etc/squid/icons
visible_hostname Satan
cache_mgr admin@localhost
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
netdb_filename /var/log/squid/netdb.state
pinger_enable on
pinger_program /usr/lib/squid/pinger
sslcrtd_program /usr/lib/squid/security_file_certgen -s /usr/local/squid/var/logs/ssl_db -M 4MB -b 4096
tls_outgoing_options cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/squid/etc/rootca/
tls_outgoing_options options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS
sslcrtd_children 5
logfile_rotate 10
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src 192.168.0.0/24
forwarded_for delete
via off
httpd_suppress_version_string on
uri_whitespace strip
acl dynamic urlpath_regex cgi-bin \?
cache deny dynamic
cache_mem 2048 MB
maximum_object_size_in_memory 8192 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
minimum_object_size 0 KB
maximum_object_size 16 MB
cache_dir aufs /cache 10000 16 256
offline_mode off
cache_swap_low 90
cache_swap_high 95
cache allow all
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
#Remote proxies
# Setup some default acls
# ACLs all, manager, localhost, and to_localhost are predefined.
acl allsrc src all
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 3128 3129 1025-65535
acl sslports port 443 563
acl purge method PURGE
acl connect method CONNECT
# Define protocols used for redirects
acl HTTP proto HTTP
acl HTTPS proto HTTPS
# SslBump Peek and Splice
# http://wiki.squid-cache.org/Features/SslPeekAndSplice
# http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit
# Match against the current step during ssl_bump evaluation [fast]
# Never matches and should not be used outside the ssl_bump context.
#
# At each SslBump step, Squid evaluates ssl_bump directives to find
# the next bumping action (e.g., peek or splice). Valid SslBump step
# values and the corresponding ssl_bump evaluation moments are:
# SslBump1: After getting TCP-level and HTTP CONNECT info.
# SslBump2: After getting TLS Client Hello info.
# SslBump3: After getting TLS Server Hello info.
# These ACLs exist even when 'SSL/MITM Mode' is set to 'Custom' so that
# they can be used there for custom configuration.
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports
# Always allow localhost connections
http_access allow localhost
request_body_max_size 0 KB
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_initial_bucket_level 100
delay_access 1 allow allsrc
# Reverse Proxy settings
# Custom options before auth
ssl_bump peek step1
ssl_bump bump all
# Setup allowed ACLs
# Allow local network(s) on interface(s)
http_access allow localnet
# Default block all to be sure
http_access deny allsrc
Other bonus questions are:
2. Do I need an http configuration (port 3128) when I'm only using https/ssl?
Yes, apparently it's necessary.
3. acl all src all (the first directive in the configuration) results in the following in syslog. It's only a warning, but how do I fix it?
Aug 11 12:28:46 socks squid[2718]: WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
Aug 11 12:28:46 socks squid[2718]: WARNING: You should probably remove '::/0' from the ACL named 'all'
If you find anything else that's wrong, please say so, and if possible, explain why (so we can learn).
|
The bad guy in this scenario was actually a disabled option:
#tcp_outgoing_address 10.10.66.1
For some reason, the squid server apparently didn't know where to send outgoing data and sent it to itself, causing a never-ending loop.
By enabling this directive and pointing it to my external IP, the loopback was avoided.
To me this seems like an unnecessary directive, and I can't understand why it's needed; squid should know where the internet is.
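For reference, the fix amounts to un-commenting the directive and pointing it at the host's outgoing address (192.168.0.129 is taken from the access.log above; substitute your own):

```
# squid.conf — force outgoing connections onto the LAN-facing address
tcp_outgoing_address 192.168.0.129
```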
| Squid (proxy) is eating up its own resources (and other issues) |
1,643,600,211,000 |
Is there a way to generate the default gpg.conf file? I can't find one in my fs with find / -name gpg.conf. I also tried checking gpgconf to see if it had an option for generating one, and looked for other gpg utils that come with the standard gnupg2 installation, but nothing stood out.
|
Note that the contents of gpg.conf are only used to set options that are not the default. So if you want the default options, there's no reason to have a gpg.conf file.
That said, the correct placement for gpg.conf is in the ~/.gnupg folder, and it is easily created with touch ~/.gnupg/gpg.conf.
| Generating default gpg.conf with default gnupg2 utils |
1,643,600,211,000 |
Hello and thanks for clicking into this for a look.
I noticed that in the arch wiki, under cryptdevice in dm-crypt you have this:
cryptdevice
This parameter will make the system prompt for the passphrase to unlock the device containing the encrypted root on a cold boot. It is parsed by the encrypt hook to identify which device contains the encrypted system:
cryptdevice=device:dmname
device is the path to the device backing the encrypted device. Usage of persistent block device naming is strongly recommended.
dmname is the device-mapper name given to the device after decryption, which will be available as /dev/mapper/dmname.
If a LVM contains the encrypted root, the LVM gets activated first and the volume group containing the logical volume of the encrypted root serves as device. It is then followed by the respective volume group to be mapped to root. The parameter follows the form of cryptdevice=/dev/vgname/lvname:dmname
In this, I want to know why some people say :root, while some say :cryptoroot, and still others say :vgname. I am very confused as to which one is the official one. I used :root:allow-discards and it worked very well, so I'm asking for your take on it. This line is only edited if you want to create an encrypted Arch install, by the way.
Thanks for taking a look again and have a safe day.
|
You can use whatever you want for the dmname parameter, just make sure to use the same name when referring to the device at other places (e.g. in fstab) or use UUID. When opening the device manually using cryptsetup (cryptsetup luksOpen <device> <name>), you'll also need to specify a name, which also can be whatever you want, this is the same case. It is even possible to use a different name every time the device is opened (but that would be impractical for system devices which needs to be mounted etc.).
When opening the encrypted device, cryptsetup creates a new device-mapper device on top of the encrypted device which (from the system's point of view) is not encrypted (the system sees a "normal" device with an ext4 filesystem; the only difference is that all writes to it are encrypted before the data is written to the underlying block device), and you need a name for it; as I already said, you can use any name you want. Some tools like UDisks and systemd use luks-<UUID> just to make sure the name is unique system-wide, but it's not necessary.
This is how encrypted (unlocked) partition looks in Fedora with the luks-<UUID> name:
└─sda2 8:2 0 930,5G 0 part
└─luks-094c2ba3-eb59-48fe-83ab-eca3fe533c03 253:0 0 930,5G 0 crypt
and this is the /dev/mapper symlink:
$ ls -la /dev/mapper/luks*
lrwxrwxrwx. 1 root root 7 19. pro 08.25 /dev/mapper/luks-094c2ba3-eb59-48fe-83ab-eca3fe533c03 -> ../dm-0
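Tying this back to the cryptdevice= kernel parameter from the question: any dmname works, as long as root= refers to the same mapper path. A sketch, reusing the UUID from the listing above with an arbitrary name:

```
cryptdevice=UUID=094c2ba3-eb59-48fe-83ab-eca3fe533c03:cryptroot root=/dev/mapper/cryptroot
```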
| What is "dmname" in Arch linux grub config |
1,643,600,211,000 |
(see update below)
Using KDevelop 5.6.0 under Fedora with Plasma desktop.
I pressed a wrong keyboard shortcut, with Ctrl+Shift instead of Ctrl+Alt (or vice versa), and this "damaged" the bottom pane of the window where Konsole, the compiling report, find-across-files, etc. display their results.
Since I was in an editing task, I didn't notice the issue until I needed to activate Konsole. So I don't remember which key combination I typed.
Now the "Build" button at the top does nothing. If you press the Konsole button at the bottom, "something" covers the console pane and hides the edit window without resizing it.
Even worse, when you start KDevelop, this bottom pane is used by nothing and you see the desktop through it.
In short, KDevelop has become unusable. I suspect some configuration file has been damaged. I don't think any project conf file is involved (I've looked at <project>.kdev4 and all files in .kdev4 without seeing unusual settings). I think that the general KDevelop UI configuration is somewhere else. I didn't find anything of interest in ~/.config nor in ~/.kde.
I uninstalled and reinstalled KDevelop but, of course, this didn't purge any configuration files in my home directory.
What can I do to recover KDevelop full functionality?
Presently, it is reduced to Kate and I'm forced to pass commands in a separate console.
EDIT KDevelop works like a charm under a different user account. The question above sums up to purging some hidden file(s), but which and where? I searched ~/.cache/ without success.
EDIT - 2020-12-09
Former title was: KDevelop: bottom tool pane damaged
After a careful analysis of what is displayed on screen, I now think that the desktop manager (KDE Plasma) or window manager (KWin) configuration has been damaged because the windows lack their frame stroke. Also, the top window bar is a pure rectangle without the top rounded corners.
Which file should be reset?
|
I finally backed up all valuable files and directory and deleted my account.
I recreated it and reloaded my files making sure that no .config/, .kde/, .cache/, .kdevelop/ or other hidden directory was restored.
Of course, I lost my custom configurations but Plasma KDE and KDevelop are now behaving as expected.
Just for the record, after the reinitialisation, CMake inside KDevelop insisted on using ninja instead of make. Quite tricky to change, and it requires quitting and reloading KDevelop.
| KDE Plasma or KWin: configuration file damaged - how to reset? |
1,643,600,211,000 |
In HLWM, unfocused windows have their opacity reduced to something like 80%-90%. I would like to remove this behaviour, but I can't seem to find the correct config option for it.
herbstluftwm version 0.8.3
|
Herbstluftwm doesn't handle window opacity; this is handled by the compositor, in my case picom. In picom.conf I have set inactive-opacity = 1 to obtain the desired outcome.
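For reference, the relevant fragment of picom's configuration (commonly ~/.config/picom/picom.conf; picom uses libconfig syntax, so entries end with a semicolon):

```
# 1.0 = fully opaque unfocused windows
inactive-opacity = 1.0;
```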
| How to set herbstluftwm window transparency? |
1,643,600,211,000 |
I have installed Packetbeat. While learning the configuration for packetbeat, I saw a file fields.yml in the directory /etc/packetbeat. I tried to find information about that file, but couldn't find a source.
Can anybody tell me how that file works, or point me to some documentation?
|
To preface this, I'm not a packetbeat user, but I have used heartbeat, winlogbeat, filebeat, etc...
Generally speaking fields.yml contains fields that your beat specifically uses. Some of these fields are configured to certain values by default, for example "name" which defaults to your server's hostname, @timestamp which tracks time, in filebeat's case, "when the event log record" was generated.
Other fields may be filled out by you manually in the main YAML of the beat you're using. Examples of this include "fields" which is intended to store user-generated fields, and "tags" which allows both you and other pieces of the elastic stack to tag data for identification, you may have seen logstash tag an event with "_jsonparsefailure" or you may tag your own data to correlate it easily with a particular system or network. In most beats however, the majority of the fields included in fields.yml are fields that pertain to the type of information you are collecting, the main exception being filebeat, as it is intended to be flexible. This means that for winlogbeat, auditbeat, packetbeat, etc.. Every field for every supported event is contained in this file.
Expanding on this to pertain to Packetbeat specifically: because packetbeat only generates events which contain predefined fields (unlike filebeat), all possible fields that packetbeat uses are listed in fields.yml. You can view these fields on a per-protocol basis in: https://github.com/elastic/beats/tree/master/packetbeat/protos
Under any protocol you have an interest in i.e. .../icmp/_meta/fields.yml
The post-compilation fields.yml you see is an aggregation of each of these individual fields.yml files.
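To illustrate the shape of those per-protocol files, a _meta/fields.yml fragment typically looks something like the following; the exact keys and field names here are illustrative assumptions rather than verbatim packetbeat source:

```yaml
# Illustrative sketch of a per-protocol _meta/fields.yml entry
- key: icmp
  title: "ICMP"
  description: ICMP specific event fields.
  fields:
    - name: icmp.version
      type: long
      description: The ICMP protocol version.
```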
There isn't a lot of documentation that I could find regarding this, so I can't give a definitive reason why this file needs to exist. My guess is that this YAML file is used to generate an index template for Elasticsearch on a per-beat basis, which is why it is present in the config directories of most beats.
Related reading regarding index templates can be found here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
You can grab the fields your index is mapped to use either by:
GET /index-name
(If you used the beat's index template)
or using this guide
https://www.elastic.co/guide/en/beats/packetbeat/master/packetbeat-template.html
| What is fields.yml file |
1,643,600,211,000 |
I installed fail2ban, but when I enter fail2ban-client status in the Linux terminal, I get the following errors:
fail2ban.configreader [1616]: ERROR Found no accessible config files for 'filter.d/murmur' under /etc/fail2ban
fail2ban.configreader [1616]: ERROR No section: 'Definition'
fail2ban.configreader [1616]: ERROR No section: 'Definition'
fail2ban.configreader [1616]: ERROR No section: 'Definition'
fail2ban.configreader [1616]: ERROR No section: 'Definition'
fail2ban.configreader [1616]: ERROR No section: 'Definition'
fail2ban [1616]:ERROR Failed to access socket path: /var/run/fail2ban/fail2ban.sock. Is fail2ban running?
By the way, I know there is a problem with the configuration, but I do not know how to fix it.
|
I solved the problem. The fail2ban folder at /etc/fail2ban was empty. I downloaded the package from the official site fail2ban.org and copied its contents into /etc/fail2ban.
| How to configure fail2ban file? |
1,643,600,211,000 |
I would like to have auto-completion on the geany text editor (no, it's not an IDE) and for that you use .tags files. There's a plugin that creates .tags files for your programs, but I need to make one for an external C++ library (SFML, more specifically). The goal is to have geany display all the possible classes when I type "sf::" and show the methods of said classes when I type those. This would be a huge deal because I could learn SFML much faster this way, without having to search things so much. Thanks.
|
Ok I found the solution. After searching a bit through my folders and reading some geany documentation, I could figure out the following steps to create a library's .tags file.
Go to /usr/include/ and search for the library. In my case, it was SFML so I found the folder SFML and in there I saw all the files related to it.
Do pkg-config --list-all | grep <library name>. This will let you see how the library is referenced in pkg-config. I searched for "sfml" and found that to include all modules, I should use "sfml-all".
With the information you collected, run the following command to create the .tags file, replacing the paths where needed:
CFLAGS=`pkg-config --cflags <pkg library name>` geany -gP path/to/save/tagfile/something.cpp.tags path/to/library/headers
This should create the tags file. Then you should import it to geany by saving it in $HOME/.config/geany/tags or by importing it via GUI with Tools->Load tags file.
Here is the command I used for SFML, as an example:
CFLAGS=`pkg-config --cflags sfml-all` geany -gP /home/username/sfml.cpp.tags /usr/include/SFML/*/*.hpp
See how at the end I included all folders inside "SFML" with *, and then included all header files by using the *.hpp wildcard to match every file ending in .hpp.
| How can I make a geany .tags file for a C++ library? |
1,643,600,211,000 |
I have a system with both a public (e.g server1.foo.bar) and privately-resolvable (e.g. server1.internal.foo.bar) DNS name. SSH connections are only possible via the private IP, but I always think of these hosts in terms of their public name.
I would like to:
connect to the right IP regardless of whether I remember to use the *.internal.bar pattern
save keystrokes
I'm aware of the substitution tokens such as %h that can be used to modify the hostname given at the commandline, e.g.
Host foo
Hostname %h.some.other.domain
The behavior I'm looking for would be something like:
Host *.foo.bar
Hostname %m.internal.foo.bar
Where %m gets substituted with just the portion of the given hostname up to the first dot. I've read man 5 ssh_config as well as https://man.openbsd.org/ssh_config and couldn't find the answer, if one even exists. I'm using macOS 10.15.4:
$ ssh -V
OpenSSH_8.1p1, LibreSSL 2.7.3
|
I figured out a working method. It's a bit ugly, but it does work. This leverages the ProxyCommand directive (see manpage) combined with a small bash helper script.
Add sections like this to ~/.ssh/config — one for each domain you want remapped:
Match final host="!*.corp.foo.com,*.foo.com"
ProxyCommand ssh_ext2int %h %p
Match final host="*.quux.biz"
ProxyCommand ssh_ext2int %h %p
Save this in your $PATH somewhere as ssh_ext2int — and chmod u+x it:
#!/usr/bin/env bash
# Usage: ssh_ext2int <hostname> <port>  (invoked by ssh via ProxyCommand)
[ $# -eq 2 ] || exit 1
# Map of external domains to their internal equivalents
typeset -A dmap
dmap=(
[foo.com]=corp.foo.com
[quux.biz]=internal.qux.lan
)
d=${1#*.}            # domain part: everything after the first dot
h=${1%%.*}           # host part: everything before the first dot
nd=${dmap[$d]:-$d}   # remapped domain, falling back to the original
/usr/bin/nc -w 120 "$h.$nd" "$2" 2>/dev/null
Now, ssh server158247.quux.biz should connect you to server158247.internal.qux.lan
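As a side note on how the helper splits the name: the two parameter expansions do the work without spawning any subprocesses. A quick illustration (the hostname is just an example):

```shell
name=server158247.quux.biz
echo "host:   ${name%%.*}"   # strips the longest suffix from the first dot onward
echo "domain: ${name#*.}"    # strips the shortest prefix up to the first dot
```

This prints `host:   server158247` and `domain: quux.biz`, which are then looked up in the dmap table.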
| How can I configure ~/.ssh/config such that `ssh foo.bar` results in a connection to `foo.internal.bar`? |
1,643,600,211,000 |
I want to customize my Zathura to allow me to type #name_of_a_bookmark, hit Enter and get sent to this bookmark.
Based on the answer of another question on this site I figured out that this can be done using the map command in the zathurarc file and the command should be similar to this: map # focus_inputbar ":blist ".
Then I found out that this doesn't work because # is the symbol which starts a comment in the zathurarc file. (Using another symbol like - instead of # worked so everything else in the command is correct.)
Is there a way to escape # in the zathurarc file so I can use it for key bindings?
|
Simply escaping it with a backslash works.
map \# focus_inputbar ":blist "
| How to bind a command to the # symbol in zathura |
1,643,600,211,000 |
I'm trying to script changes in a config file (/var/lib/polkit-1/localauthority/10-vendor.d/com.ubuntu.desktop.pkla), which boils down to:
[Update already installed software]
Identity=unix-group:admin;unix-group:sudo
Action=org.debian.apt.upgrade-packages
ResultActive=yes
[Printer administration]
Identity=unix-group:lpadmin;unix-group:admin;unix-group:sudo
Action=org.opensuse.cupspkhelper.mechanism.*
ResultActive=yes
[Disable hibernate by default in upower]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=no
I'm trying to set ResultActive=yes for all blocks that start with [Disable hibernate. Using sed and regex groups I came up with:
sed -i 's/\(Disable hibernate.*\n.*\n.*\nResultActive\=\)no/\1yes/' /var/lib/polkit-1/localauthority/10-vendor.d/com.ubuntu.desktop.pkla
However this does not change the file. According to regexr, the regex matches, but checking with sed.js.org, sed doesn't change a thing.
How can I fix my sed command, to change the appropriate line for the hibernate config blocks?
Edit: it seems I can't get sed groups with newlines to work at all.
|
Sed is line-based by default - to do multi-line matches you would need to add lines to the pattern space explicitly (using the N command for example).
Instead you could do something like this which is still line-based but steps forward to load the line of interest:
$ sed '/^\[Disable hibernate/{n;n;n;/^ResultActive/s/=no/=yes/;}' file.pkla
[Update already installed software]
Identity=unix-group:admin;unix-group:sudo
Action=org.debian.apt.upgrade-packages
ResultActive=yes
[Printer administration]
Identity=unix-group:lpadmin;unix-group:admin;unix-group:sudo
Action=org.opensuse.cupspkhelper.mechanism.*
ResultActive=yes
[Disable hibernate by default in upower]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=yes
Note that like your original, this will only work if the order of lines in the block can be relied on.
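For completeness, here is a sketch of the multi-line approach mentioned above, using N to pull the following lines into the pattern space (GNU sed syntax; the sample file is recreated inline so the command is self-contained):

```shell
# Recreate the hibernate block from the question as a sample file
printf '%s\n' \
  '[Disable hibernate by default in upower]' \
  'Identity=unix-user:*' \
  'Action=org.freedesktop.upower.hibernate' \
  'ResultActive=no' > file.pkla

# N appends the next input line to the pattern space; three of them pull the
# whole block in, so the substitution can see ResultActive on the fourth line.
sed '/^\[Disable hibernate/{N;N;N;s/\(ResultActive=\)no/\1yes/;}' file.pkla
```

This still assumes the block is exactly four lines in a fixed order, just like the n;n;n version above.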
| Unable to change config file with sed using groups and multiple lines |
1,643,600,211,000 |
My PC has frozen for the third time and I had to shut it down forcibly. Why doesn't journalctl save the boot logs from before the forced shutdown? When I do journalctl --list-boots I only get the boot after the crash.
Am I not searching correctly, or is this a misconfiguration?
System: ArchLinux (5.4.8-arch1-1)
|
It depends on the freeze, and it's not clear to me what you mean by "shut it down forcibly". If this means a power off then definitely: there is no way for your PC to sync data properly. The recommended way to force a reboot is by using the magic SysRq key and the REISUB sequence. More details are at
https://wiki.archlinux.org/index.php/Sysctl
https://en.wikipedia.org/wiki/Magic_SysRq_key
Also, check that /var/log/journal exists, is writable and persistent, and that you have enough free space
| Why journalctl don't save my logs of the boot before forced shutdown? |
1,643,600,211,000 |
I've installed pgAdmin 4 through the Software Manager in Linux Mint Tricia, and I've also installed Postgres 10. But when I create a new database and try to restore it, this error message appears:
/usr/bin/pg_restore file not found. Please correct the Binary Path in the Preferences dialog"
I don't know how to solve this problem. For further clarification I've added a screenshot of the error message.
|
You mention Linux Mint, which I believe uses Ubuntu packaging.
According to the file list for the postgresql-client-10 Ubuntu Bionic package, the location of that program is /usr/lib/postgresql/10/bin/pg_restore. If you are using the Debian based Linux Mint, I think they use the same locations.
If you installed postgresql-10 from a different location, they could have used different paths. Update your question with where your packages come from for better responses.
You would need to make sure you have postgresql-client-10 installed and then update the binary paths in pgadmin preferences so it can find pg_restore and the other installed binaries.
| '/usr/bin/pg_restore' file not found. Please correct the Binary Path in the Preferences dialog |
1,574,272,699,000 |
Lets say I have 4 java instances running on my linux system, in separate directories, all of which have a config file named config.yml. I want to be able to edit 1 config.yml and have it copy across the multiple directories in real time. For example:
../dir1/config.yml
../dir2/config.yml
../dir3/config.yml
../dir4/config.yml
I want dir2 and dir3 to reference the config.yml in dir1. Is there a linux-based program or software that will allow me to do this? Or allow me to quickly sync the config.yml file across the directories?
In addition to this question, would it be possible to have it sync across multiple systems, too?
Thanks in advance!
|
Use symbolic links
Keep your /path/dir1/config.yml file and link the others
ln -s /path/dir1/config.yml /path/dir2/config.yml
ln -s /path/dir1/config.yml /path/dir3/config.yml
ln -s /path/dir1/config.yml /path/dir4/config.yml
Those three lines create "shortcuts" to the dir1 config. If you then edit the file through any of those paths, it changes the dir1 file.
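A quick throwaway demonstration that editing through any of the links changes the single underlying file (the paths under /tmp are just for the demo):

```shell
# Scratch directories standing in for dir1/dir2
demo=/tmp/cfg-sync-demo
rm -rf "$demo"
mkdir -p "$demo/dir1" "$demo/dir2"

echo 'port: 8080' > "$demo/dir1/config.yml"
ln -s "$demo/dir1/config.yml" "$demo/dir2/config.yml"

# Edit through the link in dir2...
echo 'port: 9090' > "$demo/dir2/config.yml"

# ...and the real file in dir1 reflects the change
cat "$demo/dir1/config.yml"    # prints: port: 9090
```

Note this only helps within one filesystem; syncing across multiple systems needs a different tool (e.g. rsync or a configuration manager).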
| Sync 1 config file amongst multiple directories |
1,574,272,699,000 |
BusyBox v1.28.4 () built-in shell (ash)
_______ ________ __
| |.-----.-----.-----.| | | |.----.| |_
| - || _ | -__| || | | || _|| _|
|_______|| __|_____|__|__||________||__| |____|
|__| W I R E L E S S F R E E D O M
-----------------------------------------------------
OpenWrt 18.06.2, r7676-cddd7b4c77
-----------------------------------------------------
root@mortar:~# opkg upgrade $(opkg list-upgradable | cut -d ' ' -f 1)
Configuring luci-lib-nixio.
Configuring luci-lib-jsonc.
Configuring luci-base.
Configuring luci-mod-admin-full.
uci: Parse error (section of different type overwrites prior section with same name) at line 12, byte 23
Configuring luci-app-firewall.
Configuring luci-proto-ppp.
Configuring luci-proto-ipv6.
Configuring luci.
Configuring luci-ssl.
Configuring luci-app-upnp.
uci: Parse error (section of different type overwrites prior section with same name) at line 12, byte 23
Helpful :-). Parse error in which file?
|
# cd /etc/config
# uci export 2>&1 | grep -C5 '^uci:'
option ports '5190'
option comment 'AOL, iChat, ICQ'
config default
option target 'Express'
uci: Parse error (section of different type overwrites prior section with same name) at line 12, byte 23
option proto 'udp'
option pktsize '-500'
config reclassify
option target 'Priority'
# grep -r 'config reclassify' .
./qos:config reclassify
# mv qos ~
# uci export 2>&1 | grep -C5 '^uci:'
#
| uci: Parse error (section of different type overwrites prior section with same name) at line 12, byte 23 |
1,574,272,699,000 |
Recently I've downloaded SuperCat but I can't get it to work. There is always a message:
can't find a config file
Although I've created it in ~/.spcrc/spcrc.
Anyone knows what's going on?
|
The name of your config file is incomplete. The associated file type is missing.
Here is an excerpt from https://www.linuxjournal.com/node/1005732
All rules for colorizing text are stored in files with names of the
form spcrc-ext, where "ext" is the file type. These files are stored
in the configured "system directory" or else your private ~/.spcrc
directory. For example, the rules for a C source code file in spcrc-c
might look like:
# HTML COLOR NAME COL A N T STRING or REGULAR EXPRESSION
#################### ### # # # ################################################################
Red red b ([a-zA-Z][a-zA-Z0-9_]*)[[:space:]]*\(
Brown yel b (while|for|if|switch|main|return)[[:space:]]*\(
Brown yel b (else)
Cyan cya b [[:space:]]*(include)
Green grn (do)[[:space:]]*\{
Cyan cya (case|return|default)
Green grn (break|continue)[[:space:]]*;
Magenta mag (int|char|short|float|double|long|unsigned)[[:space:]]
Blue blu b [^[:alnum:]_]([[:digit:]]+)
Brown yel "(.*)"
Brown yel <(.*)>
Magenta mag c :;
| SuperCat can't find configuration |
1,574,272,699,000 |
I want to simulate geographic distance, so I need to add delay to each received/transmitted packet.
I read that Squid could work; however, playing with its configuration hasn't given me the right results.
If I ping from my own PC to the Squid server I get a time of ~0.5, which is okay (I would prefer ~1).
If I ping between servers it's only ~0.255.
I would be glad to find an optimal solution for simulating bad traffic.
Squid: Version 3.5.12
OS: Ubuntu 16.04.3 LTS
|
You can use tc to manually add latency to packets.
There are ready-made tools to emulate wide-area networks, including delays, dropped packets, etc., e.g. netem.
| simulate geographic distance between servers |
1,574,272,699,000 |
Here it is in context:
[ 0.507474] i8042: PNP: No PS/2 controller found.
[ 0.507568] mousedev: PS/2 mouse device common for all mice
[ 0.507683] device-mapper: uevent: version 1.0.3
[ 0.507809] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: [email protected]
[ 0.508081] hidraw: raw HID events driver (C) Jiri Kosina
[ 0.508169] usbcore: registered new interface driver usbhid
[ 0.508186] usbhid: USB HID core driver
[ 0.508401] drop_monitor: Initializing network drop monitor service
[ 0.510160] Initializing XFRM netlink socket
[ 0.510252] NET: Registered protocol family 10
[ 0.513721] Segment Routing with IPv6
[ 0.513769] mip6: Mobile IPv6
[ 0.513822] NET: Registered protocol family 17
[ 0.516773] RAS: Correctable Errors collector initialized.
[ 0.516822] AVX2 version of gcm_enc/dec engaged.
[ 0.516832] AES CTR mode by8 optimization enabled
[ 0.535396] sched_clock: Marking stable (535392296, 0)->(356708768710, -356173376414)
[ 0.541048] registered taskstats version 1
[ 0.541070] Loading compiled-in X.509 certificates
[ 0.549865] Key type big_key registered
[ 0.553967] Key type encrypted registered
[ 0.553984] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 0.553999] ima: Allocated hash algorithm: sha1
[ 0.554183] xenbus_probe_frontend: Device with no driver: device/vbd/51712
[ 0.554195] xenbus_probe_frontend: Device with no driver: device/vbd/51728
[ 0.554205] xenbus_probe_frontend: Device with no driver: device/vbd/51744
[ 0.554216] xenbus_probe_frontend: Device with no driver: device/vbd/51760
[ 0.554227] xenbus_probe_frontend: Device with no driver: device/vif/0
[ 0.554241] Magic number: 1:252:3141
[ 0.554301] hctosys: unable to open rtc device (rtc0)
[ 0.556156] Freeing unused kernel image memory: 2172K
[ 0.841038] Write protecting the kernel read-only data: 20480k
[ 0.843299] Freeing unused kernel image memory: 2024K
[ 0.843560] Freeing unused kernel image memory: 152K
[ 0.843700] rodata_test: all tests were successful
[ 0.985918] blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
[ 0.993816] xvda: xvda1 xvda2 xvda3
[ 1.001625] blkfront: xvdb: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
[ 1.019880] blkfront: xvdc: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
[ 1.031687] blkfront: xvdd: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
[ 1.105659] xvdc: xvdc1
[ 1.152834] EXT4-fs (xvda3): mounted filesystem with ordered data mode. Opts: (null)
[ 1.161403] EXT4-fs (xvdd): mounting ext3 file system using the ext4 subsystem
[ 1.164350] EXT4-fs (xvdd): mounted filesystem with ordered data mode. Opts: (null)
[ 1.173317] EXT4-fs (xvda3): re-mounted. Opts: (null)
[ 1.184075] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x6a8f3c48a1e, max_idle_ns: 881591127766 ns
[ 1.207129] EXT4-fs (xvda3): re-mounted. Opts: (null)
This is inside a VM (virtual machine) under Qubes OS 4.0, a so called AppVM running Fedora 28 with kernel 4.18.5 (manually compiled).
The message (Magic number: 1:252:3141) is gone if I merge all these options on top of these base ones.
Why do I need to know this? I am reducing my kernel .config to only what's needed, so I'm comparing dmesg logs for anything that is missing and I might still need.
Note: there is no output for lspci or lsusb inside this VM (I don't know why); maybe this is how it works for VMs under Xen? So I couldn't check whether those numbers grep to anything, and there is nothing else in the dmesg itself.
EDIT: I recompiled the kernel with only CONFIG_HID_MAGICMOUSE=y changed (from # CONFIG_HID_MAGICMOUSE is not set) and the message did not appear! So I conclude that either this is not the option, or I also turned off some debug messages that were previously on.
CONFIG_HID_MAGICMOUSE:
Support for the Apple Magic Mouse/Trackpad multi-touch.
Say Y here if you want support for the multi-touch features of the
Apple Wireless "Magic" Mouse and the Apple Wireless "Magic" Trackpad.
Symbol: HID_MAGICMOUSE [=n]
Type : tristate
Prompt: Apple Magic Mouse/Trackpad multi-touch support
Location:
-> Device Drivers
-> HID support
-> HID bus support (HID [=y])
-> Special HID drivers
Defined at drivers/hid/Kconfig:561
Depends on: INPUT [=y] && HID [=y]
|
The message(Magic number: 1:252:3141) is gone if I merge all these options on top of these base ones.
The message Magic number: 1:252:3141 appears only when the kernel .config option CONFIG_PM_TRACE_RTC=y is set:
CONFIG_PM_TRACE_RTC:
This enables some cheesy code to save the last PM event point in the
RTC across reboots, so that you can debug a machine that just hangs
during suspend (or more commonly, during resume).
To use this debugging feature you should attempt to suspend the
machine, reboot it and then run
dmesg -s 1000000 | grep 'hash matches'
CAUTION: this option will cause your machine's real-time clock to be
set to an invalid time after a resume.
Symbol: PM_TRACE_RTC [=y]
Type : bool
Prompt: Suspend/resume event tracing
Location:
-> Power management and ACPI options
Defined at kernel/power/Kconfig:218
Depends on: PM_SLEEP_DEBUG [=y] && X86 [=y]
Selects: PM_TRACE [=y]
Documentation mentioning Magic number:: https://www.kernel.org/doc/Documentation/power/s2ram.txt
Quoting the following from the file (drivers/base/power/trace.c) whose code is responsible for the Magic number: 1:252:3141 message:
/*
* drivers/base/power/trace.c
*
* Copyright (C) 2006 Linus Torvalds
*
* Trace facility for suspend/resume problems, when none of the
* devices may be working.
*/
...
/*
* Horrid, horrid, horrid.
*
* It turns out that the _only_ piece of hardware that actually
* keeps its value across a hard boot (and, more importantly, the
* POST init sequence) is literally the realtime clock.
*
* Never mind that an RTC chip has 114 bytes (and often a whole
* other bank of an additional 128 bytes) of nice SRAM that is
* _designed_ to keep data - the POST will clear it. So we literally
* can just use the few bytes of actual time data, which means that
* we're really limited.
*
* It means, for example, that we can't use the seconds at all
* (since the time between the hang and the boot might be more
* than a minute), and we'd better not depend on the low bits of
* the minutes either.
*
* There are the wday fields etc, but I wouldn't guarantee those
* are dependable either. And if the date isn't valid, either the
* hw or POST will do strange things.
*
* So we're left with:
* - year: 0-99
* - month: 0-11
* - day-of-month: 1-28
* - hour: 0-23
* - min: (0-30)*2
*
* Giving us a total range of 0-16128000 (0xf61800), ie less
* than 24 bits of actual data we can save across reboots.
*
* And if your box can't boot in less than three minutes,
* you're screwed.
*
* Now, almost 24 bits of data is pitifully small, so we need
* to be pretty dense if we want to use it for anything nice.
* What we do is that instead of saving off nice readable info,
* we save off _hashes_ of information that we can hopefully
* regenerate after the reboot.
*
* In particular, this means that we might be unlucky, and hit
* a case where we have a hash collision, and we end up not
* being able to tell for certain exactly which case happened.
* But that's hopefully unlikely.
*
* What we do is to take the bits we can fit, and split them
* into three parts (16*997*1009 = 16095568), and use the values
* for:
* - 0-15: user-settable
* - 0-996: file + line number
* - 0-1008: device
*/
| Which kernel .config option causes "Magic number: 1:252:3141" on dmesg? |
1,574,272,699,000 |
Evolution 3.28.0-4 on Debian Testing.
I restored my Evolution config by copying ~/.config/evolution from a backup to my machine. The email accounts and all the emails are fine.
For each account there is a signature assigned; I can see this in the settings. But the signatures are not attached to the emails I write.
In ~/.config/evolution/signatures I see four files (hashes as names) with the HTML code of the signatures inside.
Any ideas how to "activate" those signatures so they get attached to my emails?
Best regards
Johannes
|
Solved!
The signature files had the wrong file permissions: they were 755 instead of 644!
Just did a chmod 644 056xxxxxxxxxxxxxxxxxxx and the signature worked immediately.
Best regards
| Evolution - imported signatures not used |
1,574,272,699,000 |
On my CentOS virtual machine I set up an Apache server.
I followed this link: How to set up Apache server.
Everything worked as expected at first; however, when I removed the example.com entry from the /etc/hosts file, I expected Apache to serve /var/www/html instead of example.com. But when I browse to my local IP address it still serves only example.com's index.html.
I just want to understand when Apache serves from a folder like /var/www/example.com and when it serves from /var/www/html. It confused me a bit.
|
Well, I figured out what happened, in case anyone faces the same issue.
In /etc/httpd/conf/httpd.conf I included the virtual hosts file from /etc/httpd/conf.d, which specified:
<VirtualHost *:80>
According to the VirtualHosts Common Questions documentation, this means it listens on all IPs, so the main server in /var/www/html never gets to serve any requests.
And because "example.com" is declared before "example2.com" ('.' sorts before '2' in directory order), it gets the highest priority to serve requests.
For example, if we renamed "example2.com" to "aexample2.com", it would get higher priority because it would be read first from /etc/httpd/sites-enabled ('a' comes before 'e' alphabetically), as my tests showed.
| CentOS Apache server question [closed] |
1,574,272,699,000 |
I'm using version 2 of FreeRADIUS. I've successfully changed the default EAP type.
Now I'm trying to change the inner auth because I need pap as default.
I've tried to change the inner auth for TTLS, but then this happens:
ttls {
default_eap_type = "pap"
copy_request_to_tunnel = no
use_tunneled_reply = no
virtual_server = "inner-tunnel"
include_length = yes
}
rlm_eap_ttls: Unknown EAP type pap
rlm_eap: Failed to initialize type ttls
/usr/local/etc/raddb/eap.conf[17]: Instantiation failed for module "eap"
/usr/local/etc/raddb/sites-enabled/default[310]: Failed to load module "eap".
/usr/local/etc/raddb/sites-enabled/default[252]: Errors parsing authenticate section.
I've tried to change the inner auth for PEAP as well, but got the same problem as before:
peap {
default_eap_type = "pap"
copy_request_to_tunnel = no
use_tunneled_reply = no
proxy_tunneled_request_as_eap = yes
virtual_server = "inner-tunnel"
soh = no
}
rlm_eap_peap: Unknown EAP type pap
rlm_eap: Failed to initialize type peap
/usr/local/etc/raddb/eap.conf[17]: Instantiation failed for module "eap"
/usr/local/etc/raddb/sites-enabled/default[310]: Failed to load module "eap".
/usr/local/etc/raddb/sites-enabled/default[252]: Errors parsing authenticate section.
Why doesn't it recognize PAP? Thank you.
|
The issue is that PAP isn't an EAP type. PAP is an authentication type.
EAP-TTLS is the only widely used EAP type which can use a PAP inner, so I'm assuming you're using that.
When the server processes EAP-TTLS, it extracts the attributes inside EAP-TTLS' TLS tunnel and creates RADIUS attributes from them. It then 'proxies' a request containing these attributes (possibly merged with those from the RADIUS packet carrying EAP), and sends them to another virtual server (the default being 'inner-tunnel').
In order to perform PAP authentication in the inner tunnel, you need to set up PAP as you would for RADIUS.
etc/raddb/sites-available/inner-tunnel
server inner-tunnel {
authorize {
ldap | sql | files | whichever module you use to retrieve passwords
pap
}
authenticate {
pap
}
}
Note: You also need to select PAP as the inner method on the supplicant. There's no way to negotiate whether the supplicant uses an inner EAP method, or does PAP/CHAP/MSCHAPv2 using tunnel attributes. The supplicant sends attributes, the server goes along with whatever it sends. If the supplicant sends an EAP-Message attribute and the EAP module is configured, the server will do EAP. If the supplicant sends a User-Password attribute and the PAP module is configured the server will do PAP.
This is distinct from EAP, where the supplicant and server can negotiate the EAP method. There's a bunch of example exchanges in RFC5281 where you can see the different attributes being sent.
| freeRADIUS: Error when changing inner auth |
1,574,272,699,000 |
I am running a sendmail server on CentOS 6.8. For MTA connections on port 25 I want to use tcpwrappers to reject hosts with no PTR DNS record,
so my hosts.allow looks like :
sendmail: ALL EXCEPT UNKNOWN
My problem is the mail submission port on 587 seems to share this setting. The result is that roaming users (mostly on US Cellular) who don't have a PTR record for their current IP address get rejected before they can authenticate.
I can fix this by setting up
sendmail: ALL
in hosts allow, but this about triples the number of garbage connections from spammers on port 25.
Does anyone know a way to make sendmail call libwrap for port 25 connections, but not for port 587 connections that will be authenticated?
Thanks!
|
tcp_wrappers (last stable release: 1997) dates to an awkward phase of the Internet when operating systems and applications generally lacked suitable protections; since then, operating systems ship with firewalls by default, and applications have all sorts of business logic available (features and milters in the case of sendmail) to keep the spammers to a dull roar. tcp_wrappers is problematic here because it is a single library, so you would need two distinct versions of sendmail, and probably some patching, for one to use the library via sendmail and the other via sendmailmsp.
In this case sendmail has suitable features that will reject connections without rdns but allow relay to authenticated connections via the following sendmail.mc defines (see cf/README under the source for details on these, and how to rebuild sendmail.cf):
FEATURE(`delay_checks')dnl
FEATURE(`require_rdns')dnl
(Lacking such, the next option would be to carry out the necessary business logic via a milter.) Note that the next expected move from the spammers would be to break an authenticated account and spam via that, so log monitoring, rate throttling, and so forth may need to be in place to limit and detect such.
| Sendmail 8.14.4 on CentOS 6.8 tcpwrappers problem |
1,484,847,588,000 |
I've been following "Hosting cloud-config using nginx" as an example:
There's a config that you need to put into nginx config file:
location ~ ^/user_data {
root /path/to/cloud/config/files;
sub_filter $public_ipv4 '$remote_addr';
sub_filter $private_ipv4 '$http_x_forwarded_for';
# sub_filter $private_ipv4 '$http_x_real_ip';
sub_filter_once off;
sub_filter_types '*';
}
However, when I do that, nginx -t gives:
nginx: [emerg] unknown "public_ipv4" variable
nginx: configuration file /etc/nginx/nginx.conf test failed
How do I fix this?
I'm using nginx 1.10.1 compiled with http_sub_module.
|
Right... so I really don't know who writes the docs for CoreOS, but how can you make such a mistake when the problem has been around for ages!
Basically, google 'nginx escape variable' and you'll get there.
https://github.com/openresty/nginx-tutorials/blob/master/en/01-NginxVariables01.tut
Here's a copy in case the site goes down:
geo $dollar {
default "$";
}
server {
listen 8080;
location ~ ^/user_data {
root /path/to/cloud/config/files;
sub_filter ${dollar}public_ipv4 '$remote_addr';
sub_filter ${dollar}private_ipv4 '$http_x_forwarded_for';
# sub_filter ${dollar}private_ipv4 '$http_x_real_ip';
sub_filter_once off;
sub_filter_types '*';
}
}
| CoreOS - Hosting cloud-config using nginx |
1,484,847,588,000 |
I'm on Arch Linux and using i3 as DE. I have an Elantech touchpad, shown on the list below. My computer is the Asus E403SA-WX0004T.
The touchpad just stops working. Suddenly, at any given time, it may become disabled and can't be used until the next reboot (if I plug in a mouse, the mouse works fine, but it doesn't re-enable the touchpad). Its entry in xinput is unchanged: still recognized, but disabled for some reason.
Output of $ xinput list:
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Elan Touchpad id=10 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ USB Camera id=9 [slave keyboard (3)]
↳ Asus WMI hotkeys id=11 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=12 [slave keyboard (3)]
Content of the /etc/X11/xorg.conf.d/30-touchpad.conf:
Section "InputClass"
Identifier "touchpad catchall"
MatchIsTouchpad "on"
Driver "synaptics"
MatchDevicePath "/dev/input/event4"
Option "Tapping" "on"
EndSection
(Its event number changed from 5 to 4 since the last time I checked. If I may ask another question: is there any way to keep the event number stable? I have no clue whether that could be the cause of this problem.)
Don't hesitate to ask for whatever is missing, I'll gladly add whatever you feel necessary.
|
This problem disappeared long ago; I assume it was fixed by a kernel update since then.
| Touchpad stops working (Arch) |
1,484,847,588,000 |
I'm working on an embedded system. I have multiple SD cards that hold copies of the Linux rootfs (the kernel is stored in NAND). On the original SD card, where the system lives and from which it is copied to the others, everything works nicely: the init services run as they should.
But there is a problem with the copied system on the other SD cards: the system boots, but it does not start the init services (for example networking and sshd) that the application needs.
Two things. First, when I was copying the system, not all files would copy (especially from /dev/, but that is normal given the purpose of those files). Maybe other files weren't copied properly either?
Second thing: I'm mounting
/var
/tmp
/var/tmp
on tmpfs (RAM), but I don't think that's the problem (it works fine on the original SD card).
Maybe I shouldn't copy the rootfs, and should do something else instead?
|
I had to do some copy/paste work. First, I downloaded the minimal ELDK distribution (I'm using it) and copied it all with rsync. Next I rsynced the copy of the system and put it onto the SD card over a fresh system. It all worked.
| Embedded Linux and Init problem - Init won't start |
1,484,847,588,000 |
I use packages to configure my system. Each package depends on the application packages that it configures, and contains the configuration files. (The repository lives at https://github.com/majewsky/system-configuration.)
My impression is that building configuration packages using the native packaging tools (in my case PKGBUILD files, since I'm on Arch Linux) is unnecessarily cumbersome. To package a configuration file with a single line of content, I have to put the file in my repository, reference the file in the PKGBUILD's sources, and install the file in the PKGBUILD's install() routine. (This is not specific to Arch Linux, I just used it as an example since that's what I use ATM.)
Are there options for streamlining this process? Something like a package description format targeted at configuration packages, where you have a single description file containing all the dependencies and configuration files, which can be processed into a configuration package with a single command.
[EDIT: To clarify, I'm not looking for any of the classic configuration management tools, like Puppet/Chef/Ansible/Salt/cfEngine. I consider every one of these too fat and built my own, Holo (http://holocm.org) which extends arbitrary system package managers to also handle configuration. The only thing that I find lacking in this approach is that it is relatively tedious to set up a build process for system packages that only contain a handful static files.]
If the answer is no, I'll build it myself, but I figure some investigation into prior art is useful. My googling didn't turn up anything useful so far.
|
For the sake of not leaving unanswered questions lying around, I shall note that I wrote https://github.com/holocm/holo-build and https://github.com/majewsky/art which together make for a nice package building workflow.
| Is there a nice packaging tool for configuration packages? |
1,484,847,588,000 |
How do I get a graphic display of my current KDE 5 configuration to help send a report?
One with a graphic display which allows me to copy a text report for mailing will be fine.
|
The kinfocenter command pops up a screen which details the system configuration. It doesn't have a copy to file option, but it gives enough details to be typed up and sent.
| How do I get a graphic display of my current KDE 5 configuration to help send a report? |
1,484,847,588,000 |
What is the right way to extend MANPATH in Solaris? Currently it is empty for a user session, so I get to do it manually in $HOME/.bash_profile:
MANPATH=/usr/man:/usr/share/man:/usr/sfw/man:/opt/solarisstudio12.3/man
export MANPATH
But is this right? On linux, for instance I could update /etc/manpath.config and this would be visible to all users; or man -w would help figure out man pages location. I don't know how I could do this on Solaris.
|
http://docs.oracle.com/cd/E23824_01/html/E24456/userenv-1.html
Looks like we don't really need MANPATH at all; simply adjust PATH and the man command will do the rest.
| OracleSolaris 11.2 - setting MANPATH variable |
1,484,847,588,000 |
I am trying to change the default root directory in Apache webserver but having trouble accessing it from webbrowser.
Disabled SELinux
Stopped IPTables
Trying to make : "/var/www/test/" as default folder
In cd /etc/httpd/conf/httpd.conf : I am making following changes:
292 DocumentRoot "/var/www/test"
302 <Directory /var/www/test>
303 Options FollowSymLinks
304 AllowOverride None
305 Order allow,deny
306 Allow from all
307 </Directory>
317 # This should be changed to whatever you set DocumentRoot to.
318 #
319 <Directory "/var/www/test">
Created folder test under: /var/www/ and gave chmod -R 755 /var/www/test/
Restarted httpd and restarted with no errors.
Here are few instances of logs:
Access Logs:
192.168.1.18 - - [10/Jun/2015:18:27:00 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:18:27:04 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:18:27:40 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:18:27:47 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; LCJB; rv:11.0) like Gecko"
192.168.1.18 - - [10/Jun/2015:18:27:47 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; LCJB; rv:11.0) like Gecko"
192.168.1.18 - - [10/Jun/2015:18:31:06 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:18:31:09 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:18:31:27 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:18:45:29 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:19:00:10 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:19:00:12 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:19:00:13 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:19:06:33 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:19:06:37 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
192.168.1.18 - - [10/Jun/2015:19:26:22 -0500] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
~
Error Logs:
[Wed Jun 10 18:45:20 2015] [notice] Digest: generating secret for digest authentication ...
[Wed Jun 10 18:45:20 2015] [notice] Digest: done
[Wed Jun 10 18:45:20 2015] [notice] Apache/2.2.15 (Unix) DAV/2 configured -- resuming normal operations
[Wed Jun 10 18:45:25 2015] [notice] caught SIGTERM, shutting down
[Wed Jun 10 18:45:26 2015] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jun 10 18:45:26 2015] [notice] Digest: generating secret for digest authentication ...
[Wed Jun 10 18:45:26 2015] [notice] Digest: done
[Wed Jun 10 18:45:26 2015] [notice] Apache/2.2.15 (Unix) DAV/2 configured -- resuming normal operations
[Wed Jun 10 18:59:07 2015] [notice] caught SIGTERM, shutting down
[Wed Jun 10 19:00:05 2015] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jun 10 19:00:05 2015] [notice] Digest: generating secret for digest authentication ...
[Wed Jun 10 19:00:05 2015] [notice] Digest: done
[Wed Jun 10 19:00:05 2015] [notice] Apache/2.2.15 (Unix) DAV/2 configured -- resuming normal operations
[Wed Jun 10 19:00:08 2015] [notice] caught SIGTERM, shutting down
[Wed Jun 10 19:00:09 2015] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jun 10 19:00:09 2015] [notice] Digest: generating secret for digest authentication ...
[Wed Jun 10 19:00:09 2015] [notice] Digest: done
[Wed Jun 10 19:00:09 2015] [notice] Apache/2.2.15 (Unix) DAV/2 configured -- resuming normal operations
[Wed Jun 10 19:24:44 2015] [notice] caught SIGTERM, shutting down
[Wed Jun 10 19:24:45 2015] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jun 10 19:24:45 2015] [notice] Digest: generating secret for digest authentication ...
[Wed Jun 10 19:24:45 2015] [notice] Digest: done
[Wed Jun 10 19:24:45 2015] [notice] Apache/2.2.15 (Unix) DAV/2 configured -- resuming normal operations
|
I would like to write this as a comment instead of an answer, but I can't yet so please don't ding me, community.
There is nothing wrong with the pieces you have posted so far. The error logs show the normal notices from restarting apache and the 304 return codes in the access logs you posted are normal as well; they indicate that the resource being requested hasn't changed from what the browser has cached so a retransmit of the document isn't necessary.
I spun up a cloud server and made the changes exactly as you have specified and I get my index.html with the text 'test' from /var/www/test. Making apache the owner of the directory isn't necessary as the world permissions allow reading which is good enough.
So when you say you are having trouble accessing it, what precisely do you see when you try to hit the site? The logs show a private IP which implies you are running apache on the same machine or local network; are you hitting the IP directly?
| how to change default root in apache web server? |
1,484,847,588,000 |
I've been searching around on Google, but I cannot find my answers to doing a specific thing with IRC and in Irssi, so I've come here to see if I can get any answers.
Okay first… in IRC using Irssi, when I'm connected to an IRC network, how can I display what channels are currently available on the network I'm connected to?
I have tried doing /channel list
but this doesn't show me anything what I'm looking for, all it shows is this;
Channel Network Password Settings
#irssi IRCnet
when I run /lsusers I get this;
*** There are 655 users and 1662 invisible on 9 servers
*** 16 operator(s) online
*** 1245 channels formed
*** I have 383 clients and 3 servers
*** 383 1500 Current local users 383, max 1500
*** 2317 6083 Current global users 2317, max 6083
how can I list to see those channel names it shows there like it would in other IRC clients like Xchat or mIRC?
also I would like someone to be able to clarify something since the irssi guides on commands from the Arch Wiki aren't quite too clear, particularly on the VHosts section --> https://wiki.archlinux.org/index.php/irssi#Virtual_hostname_.28vhost.29
Tell me if I'm close or on the right track about setting up and using vhosts with Irssi. It says I should set my virtual hostname inside /etc/hosts if I want to use vhosts in Irssi; is this correct? I tried doing just that, but when I try to connect to an IRC network I get
-!- Irssi: Unable to connect server irc.example.net port 6667 [Invalid argument]
and refuses to connect at all. In my current /etc/hosts file I have 2 entries as default for my local hostname, plus defaults for IPv6;
127.0.0.1 localhost
127.0.1.1 ASUS
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
so if I wanted to set my vhost I would have to replace ASUS that my current default uses to something else to be able to use VHost in Irssi am I correct?
|
How to determine which #channels are available on a server
/list
How to set a vhost
vhosts are server-specific. You will need to check the documentation for the server on which you're interested in setting a vhost. vhosts can also be referred to as cloaks. You do not set this up on your machine, but instead request it from the irc server.
Some servers allow setup of a vhost via Nickserv or another service, or require you to request a vhost/cloak from a server operator.
Freenode Cloaks
| IRC and Irssi help |
1,484,847,588,000 |
I installed Apache through yum install httpd in Fedora. When I try to start the service it shows the following error:
[root@localhost ~]# systemctl enable httpd.service
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[root@localhost ~]# systemctl start httpd.service
Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.
[root@localhost ~]# systemctl status httpd.service
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
Active: failed (Result: exit-code) since Sat 2014-05-17 21:19:25 IST; 1h 13min ago
Process: 2622 ExecStop=/usr/sbin/httpd $OPTIONS -k graceful-stop (code=exited, status=1/FAILURE)
Process: 2620 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
May 17 21:19:25 localhost.localdomain systemd[1]: Starting The Apache HTTP Se...
May 17 21:19:25 localhost.localdomain httpd[2620]: httpd: Syntax error on lin...
May 17 21:19:25 localhost.localdomain systemd[1]: httpd.service: main process...
May 17 21:19:25 localhost.localdomain httpd[2622]: httpd: Syntax error on lin...
May 17 21:19:25 localhost.localdomain systemd[1]: httpd.service: control proc...
May 17 21:19:25 localhost.localdomain systemd[1]: Failed to start The Apache ...
May 17 21:19:25 localhost.localdomain systemd[1]: Unit httpd.service entered ...
[root@localhost ~]#
|
Obviously there is a syntax error in Apache's configuration file. The journalctl excerpt truncates the message; run apachectl configtest (or httpd -t) to see the full error, including the file and line number of the offending directive.
| httpd start fails and shows up error |
1,484,847,588,000 |
I am giving Elementary OS a try, but while trying to configure it a bit I see fewer options than expected in dconf-editor. According to this source, they should look like this:
Mine look like this:
So I cannot follow the instructions from here either.
Why is that?
Is this related to the latest release? A remedy?
|
I found a relevant question on the Elementary OS help pages, one of their moderators said that the current version cannot draw the Desktop and that the suggested workaround is to install nautilus and then activate the Desktop through the dconf-editor pretty much as you have attempted to do.
I presume that installing nautilus will make the relevant entry appear in dconf-editor. They have a nice howto here.
| Why my dconf-editor has fewer options in Elementary OS Luna than supposed to? |
1,484,847,588,000 |
I've recently switched to using Slackware 14 on my laptop, so far I'm quite happy with the distro, except for 1 really annoying little thing: I can surf using hotspots, and all sorts of public wifi-access-points, but I can't seem to get on-line at home.
Prior to running Slackware, I was using Debian, so yes, my laptop has been on my home network, without mac-spoofing or anything.
Currently, I've setup my wlan interface as eth1, and added these lines in my /etc/rc.d/rc.local:
wpa_supplicant -B -Dwext -ieth1 -c/etc/wpa_supplicant.conf
Which does the trick, it seems, using something like wpa_cli or wpa_gui, I can easily connect to my home network.
I therefore changed the wpa_supplicant file a bit, adding:
network={
ssid="HomeSweetHome"
psk=0123464sdasd4d56agr6 #output from wpa_passpharse HomeSweetHome mypassphrase
key_mgmt=WPA2-PSK #and so on
}
But no matter what I do (use settings above, or connect manually) I can connect to other machines on the LAN, but as soon as I try to google something, nothing happens. Constantly "Waiting for siteX" is all I get.
Does anybody here have any idea as to what I'm missing here? There has to be something I haven't configured properly... I can't think of anything ATM, though.
Update:
Yes, I can ping 8.8.8.8, no problem. I can add network locations and share files with the other computers in the network, too.
Output of ifconfig eth1:
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.64 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::213:ceff:fef1:5267 prefixlen 64 scopeid 0x20<link>
ether 00:13:ce:f1:52:67 txqueuelen 1000 (Ethernet)
RX packets 491 bytes 57950 (56.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 10228 (9.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 17 base 0x6000 memory 0xdfcff000-dfcfffff
Output of route -n:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 303 0 0 eth1
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
192.168.1.0 0.0.0.0 255.255.255.0 U 303 0 0 eth1
ping google.com: All went well, 0% packet loss, on average 20ms/packet.
tcptraceroute isn't installed at the moment, but I'll set it up in due time. For now, here's the output of traceroute -n 8.8.8.8
traceroute 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 192.168.1.1 1.190 ms 1.872 ms 5.465ms
2 192.168.0.1 5.520 ms 5.699 ms 6.960 ms
3 78.21.0.1 15.007 ms 15.850 ms 17.525 ms
4 * * * *
5 213.224.253.9 27.151 ms 28.096 ms 28.146 ms
# And so on, all the way to:
12 * * * *
13 8.8.8.8 17.921 ms 22.678 ms 20.022 ms
|
I've added:
network={
ssid="HomeSweetHome"
psk=0123464sdasd4d56agr6
key_mgmt=WPA2-PSK #and so on
}
To the wpa_supplicant.conf file, what I should've done is just add the raw output of wpa_passphrase HomeSweetHome mypasspharse to the file, not bothering with manually adding settings like key_mgmt and others. Everything is working just fine with this:
network={
ssid="HomeSweetHome"
# psk="mypassphrase"
psk=0123464sdasd4d56agr6
}
| Wifi connection works but can't connect to internet |
1,484,847,588,000 |
I would like to disable the following warning for the root user.
You are trying to start Visual Studio Code as a super user which isn't recommended. If this was intended, please add the argument `--no-sandbox` and specify an alternate user data directory using the `--user-data-dir` argument.
I do not want to have to enter the --no-sandbox argument every time I run a snap. And I don't want to alias all of them. I am well aware of the security risks. I will only be doing this in a VM for personal use.
Is there any way to edit a config file or something to bypass this error?
|
I created this python3 workaround.
#!/usr/bin/python3
import os
import sys

if __name__ == "__main__":
    if len(sys.argv) <= 1:
        print("No snap specified")
        exit(-1)
    else:
        os.system(sys.argv[1] + " --no-sandbox " + ' '.join(sys.argv[2:]))
        exit(0)
Save the script (for example as "run"), make it executable, and add it to your PATH.
Example:
run code --user-data-dir ~/.vscode .
| How to disable snap package warning for root user? [closed] |
1,484,847,588,000 |
Why is 50-redhat.conf listed twice with different file sizes in the output of this ls:
ls -la /etc/ssh/ssh*
-rw-r--r--. 1 root root 1921 Aug 2 02:58 /etc/ssh/ssh_config
-rw-------. 1 root root 3667 Aug 2 02:58 /etc/ssh/sshd_config
-rw-r-----. 1 root ssh_keys 480 May 20 02:38 /etc/ssh/ssh_host_ecdsa_key
-rw-r--r--. 1 root root 162 May 20 02:38 /etc/ssh/ssh_host_ecdsa_key.pub
-rw-r-----. 1 root ssh_keys 387 May 20 02:38 /etc/ssh/ssh_host_ed25519_key
-rw-r--r--. 1 root root 82 May 20 02:38 /etc/ssh/ssh_host_ed25519_key.pub
-rw-r-----. 1 root ssh_keys 2578 May 20 02:38 /etc/ssh/ssh_host_rsa_key
-rw-r--r--. 1 root root 554 May 20 02:38 /etc/ssh/ssh_host_rsa_key.pub
/etc/ssh/ssh_config.d:
total 8
drwxr-xr-x. 2 root root 28 Aug 4 10:14 .
drwxr-xr-x. 4 root root 4096 Aug 4 10:14 ..
-rw-r--r--. 1 root root 581 Aug 2 02:58 50-redhat.conf
/etc/ssh/sshd_config.d:
total 8
drwx------. 2 root root 28 Aug 4 10:14 .
drwxr-xr-x. 4 root root 4096 Aug 4 10:14 ..
-rw-------. 1 root root 719 Aug 2 02:58 50-redhat.conf
|
The two files are different. The one occurring in ssh_config.d contains SSH client configuration. The one found in sshd_config.d contains SSH server configuration.
| Why does ls show this file twice with different sizes? |
1,484,847,588,000 |
I've read on ssh, and I have my setup to use ssh with OpenWrt, I use key in .ssh + my config file in .ssh but if I want to use also ssh server as localhost, what I've read is that I should setup also in .ssh/config.
I'm not sure how to use both?
could you help me?
thank you
|
.ssh/config in your home directory has normally nothing to do with the SSH server (sshd), only with the SSH client.
The SSH server will normally read .ssh/authorized_keys under your home directory if it exists. This pathname can be changed in the system-wide SSH server configuration file (usually /etc/ssh/sshd_config, although that is also changeable when building the SSH suite from source code).
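For illustration, a sketch of the relevant sshd_config line (this is the stock default made explicit; sshd expands %h to the connecting user's home directory):

```
# Default location, written out explicitly; %h is the user's home directory.
AuthorizedKeysFile %h/.ssh/authorized_keys
```

Changing that path changes where the server looks for keys for every user.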
| SSH config /localhost and remote? |
1,484,847,588,000 |
I've recently installed the raw virtual image of FreeBSD 12.1-STABLE onto a VPS provided through OVH. I'm able to login via the KVM on my control panel, but the machine is unreachable by SSH. It turns out that's because the machine isn't able to connect to the internet.
I'm able to ping the interface vtnet0 at its given IPv4 address, but # ping 8.8.8.8 or any # ping6 returns some form of "no route to host".
Here is the output for # ifconfig -a:
And here is the output of # netstat -rn:
And here's the content of /etc/rc.conf:
Section 11.5 of the FreeBSD handbook has led me to believe that the NIC is configured, but at this point I'm not really sure what to change or investigate to get online. Any guidance is appreciated.
Update (5 Apr. 2020): I've managed to get my VPS online, thanks largely to the contributor with the selected answer, help from the FreeBSD forums, and this article by Tim Chase.
My solution eventually entailed reinstalling Debian, then running # apt install network-manager followed by # nmcli device show ${INTERFACE_NAME} to get the network information assigned to my machine by OVH by default, then reinstalling FreeBSD and configuring /etc/rc.conf like so:
ifconfig_vtnet0="inet $EXTERNAL_IPv4 netmask 255.255.255.255 broadcast $EXTERNAL_IPv4"
static_routes="net1 net2"
route_net1="$GATEWAY_IPv4 -interface vtnet0"
route_net2="default $GATEWAY_IPv4 "
ifconfig_vtnet0_ipv6="inet6 $EXTERNAL_IPv6 prefixlen 64"
ipv6_defaultrouter="$GATEWAY_IPv6"
|
OVH configuration is actually quite simple, and is the same for IPv4 and IPv6:
You apply the given IPv4 address with a /32 prefix length, or the given IPv6 address with a /64 prefix length, to the network interface, statically.
You set the default IPv4 and IPv6 gateways to the IPv4/IPv6 addresses that OVH specifies, which are determined in predictable and documented ways from the /24 prefix of your host IPv4 address or the /56 prefix of your IPv6 address.
You set up a static route telling your machine how to route to the default gateway's IP address.
The third part is important. The OVH-supplied IP gateway is (intentionally) not in the same subnets as your own IPv4/IPv6 addresses, so that your LAN IP broadcast traffic is excluded. Effectively, the connection between you and the rest of Internet is a two-host LAN where the second host is not implicitly routable (by IPv4 mechanisms), so has to have an explicit route.
This is the same for all operating systems, and simply varies in the ways that individual operating systems set it up. In "OpenBSD with only a /32 repeatedly deletes its static route to the world", as you can see, the OpenBSD way is the normal default IP gateway stuff plus an extra route in /etc/hostname.vio0.
The FreeBSD way is settings in /etc/rc.conf:
a ifconfig_vtnet0 that statically assigns the IPv4 and IPv6 addresses, with an IPv4 netmask 255.255.255.255 or an IPv6 prefixlen 64
defaultrouter and ipv6_defaultrouter settings giving the IP addresses of the respective gateways
(for IPv4, since IPv6 can discover the route) a static route to the gateway, configured as (for example) wibble added to the value of static_routes and a (consequently named) route_wibble setting with -net, the gateway address and the IP address of the vtnet0 interface
Notice that DHCP is not involved. You can retain that ifconfig_DEFAULT, as long as you have a specific ifconfig_vtnet0 that supersedes it.
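Put together, a minimal sketch of those rc.conf settings (addresses are documentation-range placeholders, not real OVH values; substitute your assigned address and computed gateway):

```
# /etc/rc.conf sketch for an OVH-style /32 address with an off-subnet gateway
ifconfig_vtnet0="inet 203.0.113.10 netmask 255.255.255.255"
ifconfig_vtnet0_ipv6="inet6 2001:db8::10 prefixlen 64"
static_routes="ovhgw"
route_ovhgw="-net 203.0.113.254/32 -interface vtnet0"
defaultrouter="203.0.113.254"
ipv6_defaultrouter="2001:db8::1"
```

The explicit route_ovhgw line is the part that makes the otherwise-unreachable gateway routable.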
Further reading
man rc.conf
Configuring IPv6 on dedicated servers. OVH.
coltondrg? (2017-04-20). FreeBSD VMs on your OVH hypervisor. wpa.coltondrg.xyz.
| FreeBSD on VPS unable to connect to the internet; interface appears configured |
1,484,847,588,000 |
From bash manual
Invoked by remote shell daemon
Bash attempts to determine when it is being run with its standard
input connected to a network connection, as when executed by the
remote shell daemon, usually rshd, or the secure shell daemon sshd. If
Bash determines it is being run in this fashion, it reads and executes
commands from ~/.bashrc, if that file exists and is readable. It
will not do this if invoked as sh. The --norc option may be used to
inhibit this behavior, and the --rcfile option may be used to force
another file to be read, but neither rshd nor sshd generally invoke
the shell with those options or allow them to be specified.
Is a shell provided by ssh username@server login or nonlogin?
If it is a login shell, why doesn't it execute commands from ~/.profile, but instead from ~/.bashrc, according to the bash manual?
Thanks.
My OS is Ubuntu 16.04, but the bash manual isn't OS specific.
|
SSH starts a login shell, as alluded to in its manpage:
If a command is specified, it is executed on the remote host instead of a login shell.
You can verify this within Bash with
shopt login_shell
which will show whether it is running as a login shell.
Bash’s behaviour when started remotely, whether as a login shell or otherwise, is as documented in the section you quote. The behaviour you’re comparing it with is that of interactive shells, and a remote shell adds the .bashrc processing on top of the interactive login shell behaviour.
Note that Ubuntu systems typically have a .bash_profile script, which takes priority over .profile, and they typically have .bash_profile source .bashrc in any case...
| Is a shell provided by `ssh username@server` login, and does it execute ~/.profile or ~/.bashrc? [closed] |
1,514,334,085,000 |
Because the ISP blocks port 80, I can't run a web server. As a workaround, is it possible to specify a different port for Apache? I believe I've seen mention of using port 81, or some other ports.
Not for production, just mucking around.
|
Assuming a recent version of Linux and Apache.
To accomplish this configuration change, modify
/etc/httpd/conf/httpd.conf
replacing the
Listen 80
directive with a different port.
As far as ports go, I'd recommend higher. Check a list of TCP and UDP port numbers, go high and stay away from known ports.
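A minimal sketch of the change (8080 is an arbitrary choice; anything above 1024 avoids privileged ports, and checking the known-ports list avoids collisions):

```
# httpd.conf: accept connections on 8080 instead of the blocked port 80
Listen 8080
```

Clients then need the port in the URL, e.g. http://example.org:8080/.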
| running apache web server on an arbitrary port? |
1,514,334,085,000 |
I want someone from windows to log in into my server using SSH, so he can edit files and install things. Is there a step to step how to do it? I need to:
Create his user account.
Configure it, giving him access to a single folder and nothing else (how?)
Generate a public key for him on Windows (how?)
Add his public key to authorized_keys correctly.
Tell him the command he needs to use to actually log in from the Windows terminal.
I pretty much only know how to create the account. How to accomplish the latter steps?
|
(2) You may configure sshd to chroot() for this user. See man 5 sshd_config, search for ChrootDirectory and ForceCommand.
(3) You must create a key pair. The public key is used on the server; the private key is used by the client. See ssh-keygen. You may need ssh-keygen -e ... to convert the key so that it is usable by PuTTY, although PuTTY may be able to do this conversion itself nowadays.
(4) This is basically adding a line to a text file:
cat public_key_file >>/path/to/authorized_keys
(5) Your user will have to download the Windows SSH program putty and configure it to use the private key you supplied.
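Putting (2) together concretely, a hedged sshd_config sketch (the user name and directory are hypothetical; note that sshd requires the chroot target to be root-owned and not group/world-writable):

```
# /etc/ssh/sshd_config sketch: confine one user to a single directory,
# allowing SFTP file access only.
Match User contractor
    ChrootDirectory /srv/contractor
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

For (5), once the private key is loaded he connects with something like ssh contractor@yourserver (Windows 10 and later ship an OpenSSH client), or via PuTTY as described above.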
| How to enable someone to SSH to my server from Windows? [closed] |
1,514,334,085,000 |
I'm looking for a listing of general purpose configuration storage subsystems on Linux.
I have tried looking for it in the documentation, but I got no further than the standard configuration storage subsystems (file system, environment variables).
As an example on Windows I know of:
file-system
environment variables
windows registry
I'm looking for a similar listing for Linux.
Thanks in advance.
|
In Linux, plain text files are normally used for storing application and service configuration.
/etc is used for system services.
$HOME/.config/{appname|appname/} is preferred for applications run by a normal user though old apps and utils may use $HOME/{.appname|.appname/}.
Gnome 2 used a system similar to the Windows registry, GConf, but it is being deprecated.
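The $HOME/.config location mentioned above follows the XDG Base Directory convention, which a spec-following application resolves like this (a sketch, not a complete XDG implementation):

```shell
# Applications honouring the XDG Base Directory spec use $XDG_CONFIG_HOME
# for per-user config, falling back to ~/.config when it is unset.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}"
echo "$config_dir"
```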
| Which subsystems exist in Linux for general purpose configuration storage? |
1,514,334,085,000 |
I put "source /etc/profile" in /etc/bash.bashrc and now I am unable to open a terminal window in Ubuntu 16.04: whenever I try to open one, it closes after a few seconds. During those few seconds, there is no prompt and it does not accept commands.
My /etc/bash.bashrc looks like
# System-wide .bashrc file for interactive bash(1) shells.
# To enable the settings / commands in this file for login shells as well,
# this file has to be sourced in /etc/profile.
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
source /etc/profile
alias login="sudo login"
# set a fancy prompt (non-color, overwrite the one in /etc/profile)
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
# Commented out, don't overwrite xterm -T "title" -n "icontitle" by default.
# If this is an xterm set the title to user@host:dir
#case "$TERM" in
#xterm*|rxvt*)
# PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"'
# ;;
#*)
# ;;
#esac
# enable bash completion in interactive shells
#if ! shopt -oq posix; then
# if [ -f /usr/share/bash-completion/bash_completion ]; then
# . /usr/share/bash-completion/bash_completion
# elif [ -f /etc/bash_completion ]; then
# . /etc/bash_completion
# fi
#fi
# sudo hint
if [ ! -e "$HOME/.sudo_as_admin_successful" ] && [ ! -e "$HOME/.hushlogin" ] ; then
case " $(groups) " in *\ admin\ *|*\ sudo\ *)
if [ -x /usr/bin/sudo ]; then
cat <<-EOF
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
EOF
fi
esac
fi
# if the command-not-found package is installed, use it
if [ -x /usr/lib/command-not-found -o -x /usr/share/command-not-found/command-not-found ]; then
function command_not_found_handle {
# check because c-n-f could've been removed in the meantime
if [ -x /usr/lib/command-not-found ]; then
/usr/lib/command-not-found -- "$1"
return $?
elif [ -x /usr/share/command-not-found/command-not-found ]; then
/usr/share/command-not-found/command-not-found -- "$1"
return $?
else
printf "%s: command not found\n" "$1" >&2
return 127
fi
}
fi
This is my /etc/profile
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
export NODE_REPL_HISTORY=""
unset HISTFILE
set +o history
alias login="sudo login"
if [ "$PS1" ]; then
if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
# The file bash.bashrc already sets the default PS1.
# PS1='\h:\w\$ '
if [ -f /etc/bash.bashrc ]; then
. /etc/bash.bashrc
fi
else
if [ "`id -u`" -eq 0 ]; then
PS1='# '
else
PS1='$ '
fi
fi
fi
if [ -d /etc/profile.d ]; then
for i in /etc/profile.d/*.sh; do
if [ -r $i ]; then
. $i
fi
done
unset i
fi
I am 99% sure that the reason the terminal closes is the infinite loop I accidentally created, where one calls the other and the other calls the one.
What can I do?
|
The problem is indeed the infinite loop. The shell reads /etc/profile, sees that it needs to read /etc/bash.bashrc and does so, sees that it needs to read /etc/profile and does so, etc. Eventually the shell decides that it's recursing too deeply and gives up.
Press Ctrl+C while the shell is still working its way through the startup files. You'll get a prompt.
Then remove both the inclusion of /etc/bash.bashrc in /etc/profile and the inclusion of /etc/profile in /etc/bash.bashrc. Those files have different roles:
/etc/profile is read at login time and does things like setting environment variables. It is often executed by a shell other than bash.
/etc/bash.bashrc is a configuration file of bash, only for interactive sessions. It should contain things like aliases and prompts. Bash loads it when you run bash in a terminal.
| put "source /etc/profile" in /etc/bash.bashrc and unable to open terminal window in ubuntu 16.04 |
1,514,334,085,000 |
Where can I find what characters are valid in an X11 config file Identifier name?
I'd like to name my identifier libinput touchpad disable tap-to-click but I don't know if hyphens are valid.
I'm running Manjaro Linux.
|
I don’t think there are any particular restrictions, apart from the use of quotes; the configuration file manpage doesn’t mention anything specific. Looking at the parser source code, a string can contain anything apart from double quotes, carriage returns, newlines and the null byte.
You should certainly be able to use hyphens.
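For illustration, a hypothetical snippet (not taken from the question's system) using a quoted Identifier with both spaces and hyphens:

```
Section "InputClass"
    Identifier "libinput touchpad disable tap-to-click"
    MatchIsTouchpad "on"
    Driver "libinput"
    Option "Tapping" "off"
EndSection
```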
| What are valid characters in an X11 .conf Identifier name? |
1,514,334,085,000 |
My structure is this
cdn
- node_modules/
- build/
-- bower_components/
-- css/
-- templates/
-- sst/
--- index.html
How can I configure apache2/symlink/.htaccess to use www.app.com instead of www.app.com/build/sst/ in order to access index.html while maintaining the folder structure?
My current setup is
<VirtualHost *:80>
DocumentRoot /app/www/vhosts/www.app.no/cdn/
ServerName www.app.no
ServerAlias www.app.no js.www.app.no html.www.app.no css.www.app.no
ServerAdmin [email protected]
AddDefaultCharset UTF-8
Include /etc/apache2/local.conf.d/app-restricted-access.conf
ErrorLog /app/www/vhosts/www.app.no/logs/error_log
CustomLog /app/www/vhosts/www.app.no/logs/access_log combined
</VirtualHost>
|
Have you tried the DirectoryIndex directive?
DirectoryIndex index.html index.txt /app/www/vhosts/www.app.no/cdn/build/sst/index.html
from https://httpd.apache.org/docs/current/mod/mod_dir.html
(The drawback is that it will use this index for all directory under www.app.no, unless they have a local index.html or index.txt )
Edit:
mod_dir must be enabled; this can be checked via the LoadModule directives.
Search for LoadModule in the config file; you should have a line like
LoadModule dir_module /usr/lib/apache2/modules/mod_dir.so
(with proper paths for /usr/lib/apache2/modules )
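On Debian-family systems the module is usually toggled with helper scripts rather than by editing LoadModule lines directly (a sketch, assuming the apache2 packaging):

```
$ sudo a2enmod dir
$ sudo systemctl reload apache2
```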
| My index.html not in DocumentRoot |
1,514,334,085,000 |
Let's edit this theme's gtkrc file:
vi /usr/share/themes/industrial/gtk-2.0/gtkrc
gtk-color-scheme = "bg_color: #000000000000\nfg_color: #ffffff\nbase_color: #000000000000\ntext_color: #ffffff\nselected_bg_color: #ffffff\nselected_fg_color: #000000000000\ntooltip_bg_color: #000000000000\ntooltip_fg_color: #ffffff"
Let's change the bg_color to "red" and apply the theme to the desktop.
But the problem is this, the background of every scroll bar, and the foreground of the scroll bar, both become red, this makes it impossible to see the scroll bar due to the background being same color as the scroll bar.
By scroll bar I am referring to the scroll bar in firefox web browser for example that you'd use to scroll up and down. I am currently using my mouse's wheel button because the scroll bar does not look visible.
Why are colors being applied this way? Why isn't there a distinction between the foreground and the background of the scroll bars?
|
Every color on screen is a combination of RGB values. If you want to find the RGB value of red, blue, or any other color, you can open GIMP, use its color picker, and read off the RGB value it reports.
| gtk-color-scheme = "bg_color: red\ results in red bg + red scroll bar bg + red scroll bar ? where is the logic? |
1,514,334,085,000 |
Without changing the configuration my fish shell started to be really slow, like between 1 and 5 seconds to a simple ls command.
Note that the slow behaviour started suddenly. I have a local install and no admin rights.
I have no set -U var in my config files or other “add to path” problem that could give long (path) variables to process.
I know it is not much to proceed right now but I will add more information if I can.
I managed to pin down that Oh My Fish is responsible by renaming ~/.config/fish/conf.d/omf.fish, but my ~/.local/share/omf directory contains so much that I need some leads to troubleshoot. I use the bobthefish theme.
The command omf doctor gives
Oh My Fish version: 7
OS type: Linux
Fish version: fish, version 3.1.0
Git version: git version 2.25.1
Git core.autocrlf: no
Checking for a sane environment...
Your shell is ready to swim.
Update: without changing anything everything went back to normal. I suspect a command in the prompt was temporarilly taking a long time and that the server was at fault, not Oh-My-Fish.
|
Turns out fish was not at fault, even if I don't know what was. I'm guessing a call in the prompt that is not in the vanilla prompt. I'll accept this answer after a while in case anyone has a better idea of what might have happened.
| Troubleshooting Oh-My-Fish configuration making fish really slow |
1,514,334,085,000 |
I am trying to install the stress package on a CentOS 7 virtual machine running in VMware Player on a Windows 7 laptop. I have tried many packages and have updated yum, but whenever I run sudo yum install stress it tells me that there is no such package stress available and there is nothing to do. When I try the following download links using yum-config-manager, it tells me that no yum-config-manager command is found. The download links are
yum-config-manager --add-repo "http://download.opensuse.org/repositories/server:/monitoring/CentOS_CentOS-6/server:monitoring.repo"
/sbin/OCICLI http://software.opensuse.org/ymp/server:monitoring/openSUSE_13.2/stress.ymp?base=openSUSE%3A13.2&query=stress
How can I bypass these and download the stress package successfully? I want to use it to stress the vm but can't get past these hurdles. I am from an Ubuntu background and it was a lot easier on that. Please help.
|
Why are you trying to add an OpenSuSE repo? zypper and yum aren't compatible. If you're trying to get the stress application installed it's in EPEL so you'll have to add the EPEL repo even if you're on CentOS.
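On CentOS 7 that would look roughly like this (a sketch, assuming the machine has network access; verify the package names on your system):

```
$ sudo yum install -y epel-release
$ sudo yum install -y stress
$ stress --cpu 2 --timeout 30
```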
| "No package stress available, nothing to do" and "bash: yum-config-manager:command not found |
1,514,334,085,000 |
I am setting up a Debian server through DigitalOcean, which I initialized with a SSH-key. Currently I can log onto the server as root or as a user.
Normally when I do this type of configuration, I uncomment PermitRootLogin yes from /etc/ssh/ssh_config, and change it to PermitRootLogin no. This time, however, I saw a shorter ssh_config that contained no PermitRootLogin.
When I tried to add it in, vim's syntax highlighting didn't recognize it, and restarting sshd didn't have any effect. I looked at the man page for ssh_config and the keyword wasn't listed.
How do I prohibit logging in as root via SSH?
|
The ssh_config file is the default configuration for SSH clients. The server configuration will be found in sshd_config. The PermitRootLogin is a server setting (it modifies the behaviour of the SSH server) that should go in the server configuration file.
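A minimal sketch of the change (note the d in sshd_config, the server config, not the client one):

```
# /etc/ssh/sshd_config
PermitRootLogin no
```

Then reload the server, e.g. with sudo systemctl restart sshd (the exact service name varies by distribution).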
| Does OpenSSH no longer support the "PermitRootLogin" keyword? [closed] |
1,514,334,085,000 |
Where are the following configuration files located (e.g., Redhat)?
network settings (static/DHCP/DNS)
timezone
proxy for network
default language
|
Check out the /etc directory. All the config files for your entire system should be there with quite logical names like 'timezone'. Some of the subdirectories are also important such as /etc/sysconfig/ so be sure to browse that. The network stuff is also usually buried in there in a folder called 'interfaces' with one config file per interface.
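As a rough map for RedHat-family systems (exact filenames vary by release, so treat these as assumptions to verify):

```
/etc/sysconfig/network-scripts/ifcfg-eth0   # per-interface static/DHCP settings
/etc/resolv.conf                            # DNS resolvers
/etc/sysconfig/clock                        # timezone (newer releases: /etc/localtime)
/etc/sysconfig/i18n                         # default language (newer releases: /etc/locale.conf)
/etc/profile.d/*.sh or /etc/environment     # proxy variables are commonly set here
```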
| Linux configuration settings |
1,514,334,085,000 |
I just had a question regarding the attached part of my /var/log/auth.log file. (Raspberry Pi 2B, running Raspbian Jessy, fully updated / upgraded)
First of all, I got rid of the root user altogether, but I seem to see a bunch of log-ins for that user (displaying root and/or uid=0), which seem to be local, as there is no IP address.
The second question is: why is there an entry for a session of user smmsp every couple of minutes?
|
The first two and the last one are cron sessions, which are needed for automated tasks (what exactly runs depends on your machine; see sudo crontab -e).
The third is probably you running tail -f /var/log/auth.log through sudo.
There is nothing to be concerned about. You can't get rid of the root user, otherwise you would not be able to do any administrative tasks.
| suspicious entries in /var/log/auth.log |
1,514,334,085,000 |
I've modified my ~/.xinitrc and now the X11 startup fails. I just see a message on the screen saying something like Failed to execute.
I don't have a recovery option in my grub, so I cannot boot into a terminal session. How can I open a terminal and repair my .xinitrc?
|
Switch to a different virtual console, e.g. by pressing Ctrl+Alt+F2; most distributions provide a terminal login on virtual console 2.
If there is no such login, then use any installation live CD/DVD/USB, mount the home partition, and edit your .xinitrc from there.
| Failed to execute, How to enter terminal before login [closed] |
1,341,916,713,000 |
I have a cron job that is scheduled to run every day. Other than changing the schedule, is there any other way to do a test run of the cron job right now to see if it works as intended?
I know the command works fine when I enter it in my shell, but I want to know if it works correctly when cron runs it - it could be affected by environment variables, shell-specific stuff (such as ~ expansion), ownership and permissions, or other things.
|
As far as I know there is no way to do that directly, as cron has a special purpose - running scheduled commands at a specific time. So the best thing is either to manually create a (temporary) crontab entry or to write a script which removes and resets the environment.
Explanation of "removes and resets the environment":
A wrapper script could be started with env -i (which removes the environment), which would source a saved environment (making sure to export all variables, possibly by setting set -a first) before starting your script.
The saved environment would be the default environment of a cron job, recorded by running env (or declare -p depending on what shell your cron jobs use) as a cronjob, saving its output.
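The "saved environment" approach can be sketched in a few lines of shell. The file path and the GREETING variable here are made up for the demo; in real use the file would be written by a one-off cron job running env > ~/cron-env.

```shell
# Create a stand-in for the environment file a cron job would have saved
# (in real use: a temporary cron entry like  * * * * * env > ~/cron-env):
printf 'GREETING=hello-from-cron\n' > /tmp/cron-env

# Start from an empty environment (env -i), export everything the file
# defines (set -a), then run the command under that environment:
env -i /bin/sh -c 'set -a; . "$0"; set +a; printf "%s\n" "$GREETING"' /tmp/cron-env
# prints: hello-from-cron
```

Your actual script would take the place of the printf at the end, so it sees only what a cron job would see.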
| How can I make cron run a job right now (for testing/debugging), without changing the schedule? |
1,341,916,713,000 |
I have a script that works when I run it from the command line, but when I schedule it with cron I get errors that it cannot find files or commands. My question is twofold:
When I schedule a cron job using crontab -e, does it use my user ID as the basis for its permissions? Or does it use a cron user ID of some sort and its related permissions?
When a cron job is launched, what is the working directory? Is it the directory where I specify the script to run, or a different directory?
Here is my cron job:
15 7 * * * /home/xxxx/Documents/Scripts/email_ip_script.sh
Here is the actual script:
vIP_ADDR="`curl automation.whatismyip.com/n09230945.asp`"
echo "$vIP_ADDR"
sed "s/IPADDR/$vIP_ADDR/g" template.txt > emailmsg.txt
ssmtp [email protected] < emailmsg.txt
Here are the errors I get when I view the mail message produced by cron:
sed: can't read template.txt: No such file or directory
/home/xxxx/Documents/Scripts/email_ip_script.sh: line 15: ssmtp: command not found
It cannot find template.txt even though it resides in the same directory as the script. It also cannot run ssmtp, but I can as my user. What am I missing to get this to work properly?
|
Add cd /home/xxxx/Documents/Scripts/ if you want your job to run in that directory. There's no reason why cron would change to that particular directory. Cron runs your commands in your home directory.
As for ssmtp, it might not be in your default PATH. Cron's default path is implementation-dependent, so check your man page, but in all likelihood ssmtp is in /usr/sbin, which is not in your default PATH, only root's. For example, in your crontab:
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
15 7 * * * cd /home/xxxx/Documents/Scripts && ./email_ip_script.sh
| What is the 'working directory' when cron executes a job? |
1,341,916,713,000 |
My sysadmin has set up a bunch of cron jobs on my machine. I'd like to know exactly what is scheduled for what time. How can I get that list?
|
Depending on how your linux system is set up, you can look in:
/var/spool/cron/* (user crontabs)
/etc/crontab (system-wide crontab)
also, many distros have:
/etc/cron.d/*
These configurations have the same syntax as /etc/crontab
/etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, /etc/cron.monthly
These are simply directories that contain executables that are executed hourly, daily, weekly or monthly, per their directory name.
On top of that, you can have at jobs (check /var/spool/at/*), anacron (/etc/anacrontab and /var/spool/anacron/*) and probably others I'm forgetting.
| How can get a list of all scheduled cron jobs on my machine? |
1,341,916,713,000 |
I want to create a log file for a cron script that has the current hour in the log file name. This is the command I tried to use:
0 * * * * echo hello >> ~/cron-logs/hourly/test`date "+%d"`.log
Unfortunately I get this message when that runs:
/bin/sh: -c: line 0: unexpected EOF while looking for matching ``'
/bin/sh: -c: line 1: syntax error: unexpected end of file
I have tried escaping the date part in various ways, but without much luck. Is it possible to make this happen in-line in a crontab file or do I need to create a shell script to do this?
|
Short answer:
Escape the % as \%:
0 * * * * echo hello >> ~/cron-logs/hourly/"test$(date +\%d).log"
This also uses $(...) instead of the deprecated `...` syntax for command substitution and quotes the expansion of said command substitution.
Long answer:
The error message suggests that the shell which executes your command doesn't see the second backtick character:
/bin/sh: -c: line 0: unexpected EOF while looking for matching '`'
This is also confirmed by the second error message you received when you tried one of the other answers:
/bin/sh: -c: line 0: unexpected EOF while looking for matching ')'
The crontab manpage confirms that the command is read only up to the first unescaped % sign:
The "sixth" field (the rest of the line) specifies the command to
be run. The entire command portion of the line, up to a newline or
% character, will be executed by /bin/sh or by the shell specified in
the SHELL variable of the cronfile. Percent-signs (%) in the
command, unless escaped with backslash (\), will be changed into
newline characters, and all data after the first % will be sent to
the command as standard input.
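You can sanity-check the filename pattern outside of cron, where no escaping of % is needed (the cron-logs path is just the example from the question):

```shell
# The same expansion the cron line performs, run interactively:
logfile="$HOME/cron-logs/hourly/test$(date +%d).log"
echo "$logfile"   # e.g. .../test07.log — %d is the zero-padded day of month
```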
| How can I execute `date` inside of a crontab job? |
1,341,916,713,000 |
I have a backup script which I need to run at a particular time of a day so I am using cron for this task and from within cron am also trying to redirect the output of backup script to a logfile.
crontab -e
*/1 * * * * /home/ranveer/backup.sh &>> /home/ranveer/backup.log
In the above cron entry I am redirecting both stderr and stdout to a log file.
The above cron job executes fine according to syslog and it performs the task mentioned in the backup.sh file but it doesn't write anything to the log file.
/var/log/syslog
Oct 19 20:26:01 ranveer CRON[15214]: (ranveer) CMD (/home/ranveer/backup.sh &>> /home/ranveer/backup.log)
When I run the script from cli it works as required and output is written to a log file
ranveer@ranveer:~$ ./backup.sh &>> backup.log
ranveer@ranveer:~$ cat backup.log
Fri Oct 19 20:28:01 IST 2012
successfully copied testdir
test.txt successfully copied
-------------------------------------------------------------------------------------
ranveer@ranveer:~$
So why is the output not getting redirected to the file when run from cron?
|
I solved the problem. There are two ways:
M1
Change the redirection from &>> to 2>&1. So now crontab -e looks like
*/1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1
I believe the above works because by default cron is using sh to run the task instead of bash so &>> is not supported by sh.
M2
Change the default shell by adding SHELL=/bin/bash in the crontab -e file.
| How to redirect output to a file from within cron? |
1,341,916,713,000 |
I am reading an article about crontab
There is something about disabling automatically sending emails.
Disable Email
By default cron jobs send an email to the user account executing the cronjob. If this is not needed, put the following command at the end of the cron job line.
>/dev/null 2>&1
What is the detailed meaning of 2, >, &, and 1? Why does putting this at the end of a crontab entry turn off the email sending?
|
> is for redirect
/dev/null is a black hole where any data sent, will be discarded
2 is the file descriptor for Standard Error
> is for redirect
& is the symbol for file descriptor (without it, the following 1 would be considered a filename)
1 is the file descriptor for Standard Out
Therefore >/dev/null 2>&1 redirects the output of your program to /dev/null. Include both the Standard Error and Standard Out.
Much more information is available at The Linux Documentation Project's I/O Redirection page.
cron will only email you if there is some output from you job. With everything redirected to null, there is no output and hence cron will not email you.
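A quick demonstration of that redirection, safe to run in any POSIX shell:

```shell
# Both streams are discarded, so the group prints nothing;
# only the final status line appears:
{ echo "to stdout"; echo "to stderr" >&2; } >/dev/null 2>&1
echo "done: $?"   # prints: done: 0
```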
| What does '>/dev/null 2>&1' mean in this article of crontab basics? [duplicate] |
1,341,916,713,000 |
How can I run a cron command with existing environmental variables?
If I am at a shell prompt I can type echo $ORACLE_HOME and get a path. This is one of my environment variables that gets set in my ~/.profile. However, it seems that ~/.profile does not get loaded from cron scripts, and so my scripts fail because the $ORACLE_HOME variable is not set.
In this question the author mentions creating a ~/.cronfile profile which sets up variables for cron, and then he does a workaround to load all his cron commands into scripts he keeps in his ~/Cron directory. A file like ~/.cronfile sounds like a good idea, but the rest of the answer seems a little cumbersome and I was hoping someone could tell me an easier way to get the same result.
I suppose at the start of my scripts I could add something like source ~/.profile but that seems like it could be redundant.
So how can I get make my cron scripts load the variables from my interactive-shell profile?
|
In the crontab, before you command, add . $HOME/.profile. For example:
0 5 * * * . $HOME/.profile; /path/to/command/to/run
Cron knows nothing about your shell; it is started by the system, so it has a minimal environment. If you want anything, you need to have that brought in yourself.
| How can I run a cron command with existing environmental variables? |
1,341,916,713,000 |
Day-of-week: Allowed range 0 – 7. Sunday is either 0 or 7.
I found this after Googling; my question is why both values (0 and 7) should correspond to Sunday.
|
This is a matter of portability. In early Unices, some versions of cron accepted 0 as Sunday, and some accepted 7 as Sunday -- this format is an attempt to be portable with both. From man 5 crontab in vixie-cron (emphasis my own):
When specifying day of week, both day 0 and day 7 will be considered
Sunday. BSD and AT&T seem to disagree about this.
| Day of week {0-7} in crontab has 8 options, but we have only 7 days in a week |
1,341,916,713,000 |
Is it possible to make commands in crontab run with bash instead of sh? I know you can pass commands to bash with -c, but that's annoying and I never use sh anyway.
|
You should be able to set the environment variable prior to the cron job running:
SHELL=/bin/bash
5 0 * * * $HOME/bin/daily.job >> $HOME/tmp/out 2>&1
| How to change cron shell (sh to bash)? |
1,341,916,713,000 |
I entered crontab -r instead of crontab -e and all my cron jobs have been removed.
What is the best way (or is there one) to recover those jobs?
|
crontab -r removes the only file containing the cron jobs.
So if you did not make a backup, your only recovery options are:
On RedHat/CentOS, if your jobs have been triggered before, you can find the cron log in /var/log/cron. The file will help you rewrite the jobs again.
Another option is to recover the file using a file recovery tool. This is less likely to be successful though, since the system partition is usually a busy one and corresponding sectors probably have already been overwritten.
On Ubuntu/Debian, if your task has run before, try grep CRON /var/log/syslog
| Recover cron jobs accidently removed with crontab -r |
1,341,916,713,000 |
I have a script being run automatically that I can't find in the crontab for the expected users, so I'd like to search all users' crontabs for it.
Essentially I want to run a crontab -l for all users.
|
Well, it depends on the script, but as root you can easily view a user's crontab with
crontab -l -u <user>
Or you can read the crontabs directly from the spool directory, where there is one file per user:
cat /var/spool/cron/crontabs/<user>
To show all users' crontabs with the username printed at the beginning of each line:
cd /var/spool/cron/crontabs/ && grep . *
| As root, how can I list the crontabs for all users? |
1,341,916,713,000 |
Like many (most?) others, I edit my crontab via crontab -e, where I keep all routine operations such as incremental backups, ntpdate, various rsync operations, as well as making my desktop background Christmas-themed once a year. From what I've understood, on a fresh install or for a new user, this also automatically creates the file if it doesn't exist. However, I want to copy this file to another user, so where is the actual file that I'm editing?
If this varies between distros, I'm using Centos5 and Mint 17
|
The location of cron files for individual users is /var/spool/cron/crontabs/.
From man crontab:
Each user can have their own crontab, and though these are files in /var/spool/cron/crontabs, they are not intended to be edited directly.
| Location of the crontab file |
1,341,916,713,000 |
On my Ubuntu-Desktop and on my debian-server I have a script which needs to be executed each minute (a script that calls the minute-tic of my space online browsergame).
The problem is that on Debian derivatives cron logs to /var/log/syslog each time it executes a job. I end up seeing the message that it was executed over and over in /var/log/syslog:
Nov 11 16:50:01 eclabs /USR/SBIN/CRON[31636]: (root) CMD (/usr/bin/w3m -no-cookie http://www.spacetrace.org/secret_script.php > /dev/null 2>&1)
I know that in order to suppress the output of a program I can redirect it to /dev/null, for example to hide all error and warning messages from a program I can create a line in crontab like this
* * * * * root /usr/local/sbin/mycommand.sh > /dev/null
But I would like to run a cronjob and be sure that all generated output or errors are piped to NULL, so it doesn't generate any messages in syslog and doesn't generate any emails
EDIT:
there is a solution to redirect the cron-logs into a separate log like proposed here by changing /etc/syslog.conf
But the drawback is that then the output of ALL cronjobs is redirected.
Can I somehow redirect just a single cronjob to a separate log file? Preferably configurable inside the cron.hourly file itself.
|
Make the line this:
* * * * * root /usr/local/sbin/mycommand.sh > /dev/null 2>&1
This will capture both STDOUT (1) and STDERR (2) and send them to /dev/null.
MAILTO
You can also disable the email by setting and then resetting the MAILTO="" which will disable the sending of any emails.
Example
MAILTO=""
* * * * * root /usr/local/sbin/mycommand.sh > /dev/null 2>&1
MAILTO="[email protected]"
* * * * * root /usr/local/sbin/myothercommand.sh
Additional messaging
Oftentimes you'll get the following types of messages in /var/log/syslog:
Nov 11 08:17:01 manny CRON[28381]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
These are simply notifications via cron that a directory of cronjobs was executed. This message has nothing to do directly with these jobs, instead it's coming from the crond daemon directly. There isn't really anything you can do about these, and I would encourage you to not disable these, since they're likely the only window you have into the goings on of crond via the logs.
If they're very annoying to you, you can always direct them to an alternative log file to get them out of your /var/log/syslog file, through the /etc/syslog.conf configuration file for syslog.
| How do I completely silence a cronjob to /dev/null/? |
1,341,916,713,000 |
Once upon a time,
DISPLAY=:0.0 totem /path/to/movie.avi
after ssh'ing into my desktop from my laptop would cause totem to play movie.avi on my desktop.
Now it gives the error:
No protocol specified
Cannot open display:
I reinstalled Debian squeeze when it went stable on both computers, and I guess I broke the config.
I've googled on this, and cannot for the life of me figure out what I'm supposed to be doing.
(VLC has an HTTP interface that works, but it isn't as convenient as ssh.)
The same problem arises when I try to run this from a cron job.
|
(Adapted from Linux: wmctrl cannot open display when session initiated via ssh+screen)
DISPLAY and AUTHORITY
An X program needs two pieces of information in order to connect to an X display.
It needs the address of the display, which is typically :0 when you're logged in locally or :10, :11, etc. when you're logged in remotely (but the number can change depending on how many X connections are active). The address of the display is normally indicated in the DISPLAY environment variable.
It needs the password for the display. X display passwords are called magic cookies. Magic cookies are not specified directly: they are always stored in X authority files, which are a collection of records of the form “display :42 has cookie 123456”. The X authority file is normally indicated in the XAUTHORITY environment variable. If $XAUTHORITY is not set, programs use ~/.Xauthority.
You're trying to act on the windows that are displayed on your desktop. If you're the only person using your desktop machine, it's very likely that the display name is :0. Finding the location of the X authority file is harder, because with gdm as set up under Debian squeeze or Ubuntu 10.04, it's in a file with a randomly generated name. (You had no problem before because earlier versions of gdm used the default setting, i.e. cookies stored in ~/.Xauthority.)
Getting the values of the variables
Here are a few ways to obtain the values of DISPLAY and XAUTHORITY:
You can systematically start a screen session from your desktop, perhaps automatically in your login scripts (from ~/.profile; but do it only if logging in under X: test if DISPLAY is set to a value beginning with : (that should cover all the cases you're likely to encounter)). In ~/.profile:
case $DISPLAY in
:*) screen -S local -d -m;;
esac
Then, in the ssh session:
screen -d -r local
You could also save the values of DISPLAY and XAUTHORITY in a file and recall the values. In ~/.profile:
case $DISPLAY in
:*) export | grep -E '(^| )(DISPLAY|XAUTHORITY)=' >~/.local-display-setup.sh;;
esac
In the ssh session:
. ~/.local-display-setup.sh
screen
You could detect the values of DISPLAY and XAUTHORITY from a running process. This is harder to automate. You have to figure out the PID of a process that's connected to the display you want to work on, then get the environment variables from /proc/$pid/environ (eval export $(</proc/$pid/environ tr \\0 \\n | grep -E '^(DISPLAY|XAUTHORITY)=')¹).
Copying the cookies
Another approach (following a suggestion by Arrowmaster) is to not try to obtain the value of $XAUTHORITY in the ssh session, but instead to make the X session copy its cookies into ~/.Xauthority. Since the cookies are generated each time you log in, it's not a problem if you keep stale values in ~/.Xauthority.
There can be a security issue if your home directory is accessible over NFS or other network file system that allows remote administrators to view its contents. They'd still need to connect to your machine somehow, unless you've enabled X TCP connections (Debian has them off by default). So for most people, this either does not apply (no NFS) or is not a problem (no X TCP connections).
To copy cookies when you log into your desktop X session, add the following lines to ~/.xprofile or ~/.profile (or some other script that is read when you log in):
case $DISPLAY:$XAUTHORITY in
:*:?*)
# DISPLAY is set and points to a local display, and XAUTHORITY is
# set, so merge the contents of `$XAUTHORITY` into ~/.Xauthority.
XAUTHORITY=~/.Xauthority xauth merge "$XAUTHORITY";;
esac
¹ In principle this lacks proper quoting, but in this specific instance $DISPLAY and $XAUTHORITY won't contain any shell metacharacter.
| Open a window on a remote X display (why "Cannot open display")? |
1,341,916,713,000 |
I need my script to be executed a minute after each reboot. When I apply @reboot in my crontab it is too early for my script - I want the script to be executed after all other tasks that are routinely run on reboot. How might I run the script sometime after reboot?
|
Is the script only ever intended to run one minute after boot up, or can it be used at other times, too? In the former case, you can add sleep 60 to the beginning of your script, or in the latter case, add it to the crontab file:
@reboot sleep 60 && my_script.sh
As has been pointed out by sr_, though, perhaps you are tackling this in the wrong way, and a proper init.d or rc.d script would be a more robust solution.
| How do I start a Cron job 1 min after @reboot? |
1,341,916,713,000 |
I have defined "SHELL" variable in /etc/crontab file:
[martin@martin ~]$ grep SHELL /etc/crontab
SHELL=/usr/local/bin/bash
[martin@martin ~]$ file /usr/local/bin/bash
/usr/local/bin/bash: ELF 32-bit LSB executable, Intel 80386, version 1 (FreeBSD), dynamically linked (uses shared libs), for FreeBSD 8.0 (800107), stripped
[martin@martin ~]$
In addition, all my scripts in the /etc/crontab file are started as user "martin". However, /home/martin/.bash_profile (for login shells) and /home/martin/.bashrc (for non-login shells) contain some variables which are ignored for cron jobs, but are used when I log into the machine over SSH or open a new bash session. Why does cron ignore those variables? Isn't cron simply executing "/usr/local/bin/bash my-script.sh" with the permissions of user "martin"?
|
You can source the file you want at the top of the script or at the beginning of the job for the user that is executing the job. The source command is a shell built-in. You'd do the same thing if you made edits to those files and wanted to load the changes.
* * * * * source /home/user/.bash_profile; <command>
or
#!/bin/bash
source /home/user/.bash_profile
<commands>
| cron ignores variables defined in ".bashrc" and ".bash_profile" |