 +
 ====== Distribution Timeline ======
  
 </code>
  
 +The SystemD equivalent is to add:
 +<code>
 +CapabilityBoundingSet=CAP_NET_BIND_SERVICE
 +AmbientCapabilities=CAP_NET_BIND_SERVICE
 +</code>
 +
 +to the daemon service file.
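For context, a minimal sketch of a unit file showing where these lines go; the daemon path ''/usr/local/bin/mydaemon'' is made up for illustration:

```ini
[Unit]
Description=Example daemon binding a privileged port

[Service]
ExecStart=/usr/local/bin/mydaemon
User=daemon
# Allow binding ports below 1024 without running as root:
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```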
 ====== Mount Apple Images ======
  
 Add to the command line in ''/etc/default/grub'', the kernel parameters:
 <code>
 nopti kpti=0 noibrs noibpb l1tf=off mds=off nospectre_v1 nospectre_v2 spectre_v2_user=off spec_store_bypass_disable=off nospec_store_bypass_disable ssbd=force-off no_stf_barrier tsx_async_abort=off nx_huge_pages=off kvm.nx_huge_pages=off kvm-intel.vmentry_l1d_flush=never mitigations=off
 </code>
  
 
 Finally, access the original underlying content via the path ''/mnt/root/mnt/usb2''.
 +
 +====== Self-Delete Shell Script ======
 +
 +<code bash>
 +rm -f -- "$0"
 +</code>
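A complete sketch: a throwaway script that performs some work and then removes its own file via ''$0'', the path the script was invoked as (the path ''/tmp/selfdestruct.sh'' is made up for illustration):

```shell
#!/bin/sh
# Write a throwaway script that deletes itself after running.
cat > /tmp/selfdestruct.sh <<'EOF'
#!/bin/sh
echo "doing work"
rm -f -- "$0"    # $0 is the path to this very script
EOF
chmod +x /tmp/selfdestruct.sh

/tmp/selfdestruct.sh                                    # prints: doing work
test -e /tmp/selfdestruct.sh || echo "script removed itself"
```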
 +
 +====== Resize Last Partition and Filesystem in Image File ======
 +
 +Assuming that an image file is available and named, for example, ''raspios.img'' then the following procedure will extend the last partition and the filesystem by a given size.
 +
 +First, extend the image file ''raspios.img'' itself with zeroes:
 +<code bash>
 +dd if=/dev/zero bs=1M count=500 >> raspios.img
 +</code>
 +where:
 +  * ''1M'' is the block size,
 +  * ''500'' is the number of blocks to copy
 +
 +In this case, the image file ''raspios.img'' is extended by $500MiB$. Alternatively, ''qemu-img'' can be used to extend the image file:
 +<code bash>
 +qemu-img resize raspios.img +500M
 +</code>
 +
 +The next step is to run ''parted'' and extend the last partition inside the image file. Open the image file with ''parted'':
 +<code bash>
 +parted raspios.img
 +</code>
 +
 +and then resize the partition, for example:
 +<code bash>
 +(parted) resizepart 2 100%
 +</code>
 +where:
 +  * ''2'' is the partition number
 +
 +The ''parted'' command will resize the second partition to fill 100% of the available space (in this example, it will extend the second partition by $500MiB$).
 +
 +The final step is to enlarge the filesystem within the second partition that has just been extended by $500MiB$. ''kpartx'' will create mapped devices for each of the partitions contained within the ''raspios.img'' image file:
 +<code bash>
 +kpartx -avs raspios.img
 +</code>
 +
 +Before resizing, the existing filesystem has to be checked:
 +<code bash>
 +e2fsck -f /dev/mapper/loop0p2
 +</code>
 +where:
 +  * ''/dev/mapper/loop0p2'' is the last partition reported by ''kpartx''
 +
 +and then finally the filesystem is extended to its maximum size:
 +<code bash>
 +resize2fs /dev/mapper/loop0p2
 +</code>
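Putting the whole procedure together, a sketch of system commands to be run as root; it assumes ''raspios.img'' with partition ''2'' as the last partition, and that ''kpartx'' happens to map it to ''/dev/mapper/loop0p2'' (the loop device number depends on which loop devices are free):

<code bash>
qemu-img resize raspios.img +500M       # grow the image file by 500MiB
parted -s raspios.img resizepart 2 100% # grow the last partition to fill the image
kpartx -avs raspios.img                 # map partitions to /dev/mapper/loopNpM
e2fsck -f /dev/mapper/loop0p2           # check the filesystem before resizing
resize2fs /dev/mapper/loop0p2           # grow the filesystem to fill the partition
kpartx -d raspios.img                   # tear the mappings back down
</code>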
 +
 +====== Delete Files Older than X Days ======
 +
 +<code bash>
 +find /path -mtime +N -delete
 +</code>
 +where:
 +  * ''N'' is the number of days (''+N'' matches files last modified more than ''N'' whole days ago)
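A safe way to verify the behaviour before deleting anything is to use ''-print'' on a scratch directory first; a sketch (the directory ''/tmp/mtime-demo'' and the file names are made up for illustration):

```shell
mkdir -p /tmp/mtime-demo
touch -d "10 days ago" /tmp/mtime-demo/old.log   # mtime 10 days in the past
touch /tmp/mtime-demo/new.log                    # mtime now
find /tmp/mtime-demo -type f -mtime +7 -print    # lists only old.log
find /tmp/mtime-demo -type f -mtime +7 -delete   # then delete the matches
ls /tmp/mtime-demo                               # new.log remains
```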
 +
 +====== Elusive Errors from Crontab ======
 +
 +Sometimes the error:
 +<code bash>
 +/bin/sh: 1: root: not found
 +</code>
 +might be reported by cron.
 +
 +The reason might be that a user ran ''crontab /etc/crontab'', in which case a separate crontab would have been created under ''/var/spool/cron/crontabs/''. To remedy the situation, simply delete the offending file under ''/var/spool/cron/crontabs/'' and reload the cron daemon.
 +
 +====== Clear Framebuffer Device ======
 +
 +The following command will clear a ''1024x768'' Linux framebuffer at 32 bits per pixel (the block size is one row of pixels, the count is the number of rows):
 +<code bash>
 +dd if=/dev/zero of=/dev/fb0 bs=$((1024 * 4)) count=768
 +</code>
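The size arithmetic can be sanity-checked without touching ''/dev/fb0'' by piping to ''wc -c'' instead; a sketch assuming a ''1024x768'' framebuffer at 32 bits per pixel:

```shell
w=1024; h=768; bpp=32
echo $((w * h * bpp / 8))                       # 3145728 bytes in total
# Write one row per block and count the bytes produced; same total:
dd if=/dev/zero bs=$((w * bpp / 8)) count="$h" 2>/dev/null | wc -c
```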
 +
 +====== Adding Mount Point Dependencies to SystemD Service Files ======
 +
 +To get a list of filesystems that are configured (i.e. via ''/etc/fstab''), issue:
 +<code bash>
 +systemctl list-units | grep '/path/to/mount' | awk '{ print $1 }'
 +</code>
 +
 +The command will return a list of mount units all ending in ''.mount''.
 +
 +Edit the SystemD service file in ''/etc/systemd/system/'' and add:
 +<code>
 +After=... FS.MOUNT
 +Requires=... FS.MOUNT
 +</code>
 +where:
 +  * ''FS.MOUNT'' is the mount unit retrieved with the previous command
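For instance, assuming the previous command returned a hypothetical mount unit ''mnt-data.mount'' (for a filesystem mounted at ''/mnt/data''), the relevant part of the service file would look like:

```ini
[Unit]
Description=Example service that needs /mnt/data
# Start only after the mount unit and fail if it cannot be started:
After=network.target mnt-data.mount
Requires=mnt-data.mount
```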
 +
 +====== Detaching a Cryptsetup Header from an Existing Encrypted Disk ======
 +
 +Creating an encrypted container with a detached header adds some plausible deniability since the partition or drive signature will not be observable to someone obtaining the disk drive. 
 +
 +An encrypted volume with a detached header can be created using the ''cryptsetup'' utility on Linux, but the question is whether the header can be "detached" at a time later than creation. Fortunately, the LUKS header is very straightforward: much like any other filesystem header, it resides at the start of the disk, such that "detaching" the header involves dumping it to a file and then deleting it from the disk drive.
 +
 +Given an encrypted disk drive recognized as ''/dev/sdb'' on Linux, the first operation would be to dump the header:
 +<code bash>
 +cryptsetup luksHeaderBackup /dev/sdb --header-backup-file /opt/sdb.header
 +</code>
 +
 +Next, the ''/dev/sdb'' drive has to be inspected in order to find out how large the LUKS header is:
 +<code bash>
 +cryptsetup luksDump /dev/sdb
 +</code>
 +
 +which will print out something similar to the following:
 +<code>
 +LUKS header information
 +Version:        2
 +Epoch:          3
 +Metadata area:  12475 [bytes]
 +Keyslots area:  18312184 [bytes]
 +UUID:           26e2b280-de17-6345-f3ac-2ef43682faa2
 +Label:          (no label)
 +Subsystem:      (no subsystem)
 +Flags:          (no flags)
 +
 +Data segments:
 +  0: crypt
 +        offset: 22220875 [bytes]
 +        length: (whole device)
 +        cipher: aes-xts-plain64
 +        sector: 512 [bytes]
 +
 +Keyslots:
 +  0: luks2
 +        Key:        256 bits
 +...
 +</code>
 +
 +The important part here is the offset:
 +<code>
 +...
 +Data segments:
 +  0: crypt
 +        offset: 22220875 [bytes]
 +        length: (whole device)
 +...
 +</code>
 +
 +''22220875'' is the number of bytes from the start of the disk representing the length of the header.
 +
 +The next step is thus to delete the header:
 +<code bash>
 +dd if=/dev/zero of=/dev/sdb bs=22220875 count=1
 +</code>
 +where:
 +  * ''22220875'' is the length of the header
 +
 +Finally, the disk can be opened using ''cryptsetup'' and by providing the header file created previously at ''/opt/sdb.header'':
 +
 +<code bash>
 +cryptsetup luksOpen --header /opt/sdb.header /dev/sdb mydrive
 +</code>
 +
 +The command should now open the drive with the header detached and placed at ''/opt/sdb.header''.
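Conversely, should the disk ever have to be restored to a self-contained LUKS volume, the saved header can be written back with ''cryptsetup'' itself:

<code bash>
cryptsetup luksHeaderRestore /dev/sdb --header-backup-file /opt/sdb.header
</code>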
 +
 +====== Block Transfers over the Network ======
 +
 +When transferring large files over the network the following considerations must be observed:
 +  * encryption - whether encryption is necessary or not; encryption will slow down a transfer particularly if there is no hardware acceleration available,
 +  * compression - depending on the files being transferred, compression can reduce the amount of data being transferred; nevertheless, if the compressor only uses a single CPU, the CPU will become the bottleneck during the transfer
 +
 +For instance, with both encryption and compression, the following commands executed on the client and the server will transfer ''/dev/device'' over the network.
 +
 +On the server (receiver), issue:
 +<code bash>
 +nc -l -p 6500 |                                        # listens on port 6500
 +    openssl aes-256-cbc -d -salt -pass pass:mysecret | # decrypts with password mysecret
 +    pigz -d |                                          # decompresses
 +    dd bs=16M of=/dev/device                           # writes the stream to /dev/device
 +</code>
 +where:
 +  * ''aes-256-cbc'' is the symmetric cipher to use (execute ''cryptsetup benchmark'' for example to get a speed estimate of the available ciphers and hopefully find a hardware accelerated cipher),
 +  * ''pigz'' is a parallel gzip tool that will make use of all CPUs, thereby preventing the (de)compression from becoming a CPU bottleneck,
 +
 +on the client (sender), issue:
 +<code bash>
 +pv -b -e -r -t -p /dev/device |                     # reads from /dev/device (with stats)
 +    pigz -1 |                                       # compresses the stream
 +    openssl aes-256-cbc -salt -pass pass:mysecret | # encrypts with password mysecret
 +    nc server.lan 6500 -q 1                         # connects to server.lan on port 6500
 +</code>
 +where:
 +  * ''-b -e -r -t -p'' are all flags that turn on, in turn:
 +    * ''-b'' a byte counter that will count the number of bytes read,
 +    * ''-e'' turns on an estimated ETA for reading the entire device,
 +    * ''-r'' the rate counter and will display the current rate of data transfer,
 +    * ''-t'' a timer that will display the total elapsed time spent reading,
 +    * ''-p'' will enable a progress bar,
 +  * ''-q 1'' indicates that ''nc'' should terminate one second after ''EOF'' has been reached while reading ''/dev/device''
 +
 +Alternatively, if no compression or encryption is desired, [[/libvirt/automatic_virtual_machine_cloning_using_network_block_devices#server|the Network Block Device (NBD)]] might be more convenient.
 +
 +====== Determine if System is Big- or Little Endian ======
 +
 +<code bash>
 +echo -n I | hexdump -o | awk '{ print substr($2,6,1); exit}'
 +</code>
 +will display:
 +  * ''0'' on a big endian system,
 +  * ''1'' on a little endian system
 +
 +This works because the ASCII character ''I'' is ''0x49'' (octal ''111''): ''hexdump -o'' prints two-byte words in octal, such that on a little endian machine the word reads ''000111'' (low byte first) whereas on a big endian machine it reads ''044400'', and the sixth character of the word differs accordingly.
 +
 +====== Network Emulation and Testing using Traffic Control ======
 +
 +The traffic shaper (''tc'') built into the Linux kernel can be used in order to perform network testing - in particular, granting the ability to simulate packet delay, packet loss, packet duplication, packet corruption, packet reordering as well as rate control.
 +
 +A simple setup would look like the following, where a Linux gateway NATs a client machine ''A'' to the Internet and, at the same time, uses traffic shaping to delay the packets sent by the client machine ''A''.
 +
 +<code>
 +IP: a.a.a.a
 ++---+          eth0 +---------+ eth1
 +| A +-------------->+ Gateway +--------------> Internet
 ++---+               +---------+
 +</code>
 +
 +Using traffic shaping, the following commands can be used to induce a delay for all packets originating from client ''A'' and then forwarded to the Internet:
 +
 +<code bash>
 +tc qdisc del dev eth1 root
 +tc qdisc add dev eth1 handle 1: root htb
 +tc class add dev eth1 parent 1: classid 1:15 htb rate 100000mbit
 +tc qdisc add dev eth1 parent 1:15 handle 20: netem delay 4000ms
 +tc filter add dev eth1 parent 1:0 prio 1 protocol ip handle 1 fw flowid 1:15
 +</code>
 +
 +''netem'' will take care of delaying the packets, following this example, by a constant $4000ms$ per packet. Once the classes have been established, a filter is set up to match all packets marked by ''iptables'' and push them through the ''netem'' qdisc.
 +
 +''iptables'' can then be used to mark packets and send them through ''tc'':
 +<code bash>
 +iptables -t mangle -A FORWARD -s a.a.a.a -j MARK --set-mark 1
 +</code>
 +where:
 +  * ''1'' is the mark established by the ''tc'' filter command.
 +
 +In other words, when packets arrive from client ''A'' at IP ''a.a.a.a'' on interface ''eth0'' of the Linux gateway, the packets are marked with the mark ''1''. When the packets are to be sent out to the Internet over interface ''eth1'', all marked packets are matched by the ''fw'' filter, pushed through the classifiers into class ''1:15'', and finally delayed by ''netem'' (qdisc ''20:'') by $4000ms$ each.
 +
 +The following schematic illustrates the traffic control setup achieved using the commands written above:
 +<code>
 +   root 1: root HTB (qdisc)
 +        |
 +     1:15 HTB (class)
 +        |
 +     20: netem (qdisc)
 +</code>
 +and it can be displayed by issuing the command:
 +<code bash>
 +tc -s qdisc ls dev eth1
 +</code>
 +that would result in the following output:
 +<code>
 +    qdisc htb 1: root refcnt 2 r2q 10 default 0 direct_packets_stat 23 direct_qlen 1000
 +    Sent 3506 bytes 23 pkt (dropped 0, overlimits 0 requeues 0) 
 +    backlog 0b 0p requeues 0
 +   qdisc netem 20: parent 1:15 limit 1000 delay 4s
 +    Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 +    backlog 0b 0p requeues 0   
 +</code>
 +
 +Given this setup, the traffic shaper ''tc'' has to be set up only once and then ''iptables'' marking can be leveraged to selectively mark packets that have to be delayed. Using both ''iptables'' and ''tc'' is somewhat more flexible in terms of separation of concerns. ''iptables'' is used to perform packet matching and then ''tc'' is used to induce all kinds of effects supported by ''netem'' on the marked packets.
 +
 +At any point in time, a single ''tc'' command can be used to change the induced delay by modifying the queueing discipline. For instance, by issuing the command:
 +<code bash>
 +tc qdisc change dev eth1 parent 1:15 handle 20: netem delay 10ms
 +</code>
 +any new packets will be delayed by $10ms$ instead of $4000ms$. In case there are other packets in the queue, previously having been delayed by $4000ms$, then the packets will not be flushed and they will arrive in due time.
 +
 +As a side-note, there is a certain degree of overlap in features between ''iptables'' and the network emulator ''netem''. For instance, the following ''iptables'' command:
 +<code bash>
 +iptables -t mangle -A FORWARD -m statistic --probability 0.5 -s a.a.a.a -j DROP
 +</code>
 +will achieve the same effect as using the traffic shaper ''tc'' network emulator ''netem'' and induce a $50\%$ loss of packets:
 +<code bash>
 +tc qdisc del dev eth1 root
 +tc qdisc add dev eth1 handle 1: root htb
 +tc class add dev eth1 parent 1: classid 1:15 htb rate 10000mbps
 +tc qdisc add dev eth1 parent 1:15 handle 20: netem loss 0.5%
 +tc filter add dev eth1 parent 1:0 prio 1 protocol ip handle 1 fw flowid 1:15
 +
 +iptables -t mangle -A FORWARD -s a.a.a.a -j MARK --set-mark 1
 +</code>
 +
 +The exact same effect can be achieved just using the traffic shaper ''tc'', the network emulator ''netem'' and without ''iptables'':
 +<code bash>
 +tc qdisc del dev eth1 root
 +tc qdisc add dev eth1 handle 1: root htb
 +tc class add dev eth1 parent 1: classid 1:15 htb rate 10000mbps
 +tc qdisc add dev eth1 parent 1:15 handle 20: netem loss 0.5%
 +tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip src a.a.a.a/32 flowid 1:15
 +</code>
 +
 +All the variants above will, on average, randomly drop half the forwarded packets originating from the IP address ''a.a.a.a''.
 +
 +The difference between using ''tc'' and ''iptables'', aside from the different features, is that ''tc'' works directly on the queues of each network interface: on ingress, ''tc'' processes packets before they can be manipulated with ''iptables'', whereas on egress the ''tc'' queues are traversed after ''iptables'' has processed the packets (which is why a mark set by ''iptables'' is visible to the egress filter).
 +
 +====== Encoding Binary Data to QR-Code ======
 +
 +The following command:
 +
 +<code bash>
 +cat Papers.key | qrencode -o - | zbarimg --raw -q -1 -Sbinary - > Papers.key2
 +</code>
 +will:
 +  - pipe the contents of the file ''Papers.key'',
 +  - create a QR code image from the data,
 +  - read the QR code from the image and write it to ''Papers.key2''
 +
 +effectively performing a round-trip by encoding and decoding the binary data contained in ''Papers.key''.
 +
 +Alternatively, since binary data might not be properly handled by various utilities, the binary data can be armored by using an intermediary base64 encoder. In other words, the following command:
 +<code bash>
 +cat Papers.key | base64 | qrencode -o - > Papers.png
 +</code>
 +will:
 +  - pipe the contents of the file ''Papers.key'',
 +  - base64 encode the data,
 +  - generate a QR code image file ''Papers.png''
 +
 +Then, in order to decode, the following command:
 +<code bash>
 +zbarimg --raw -q -1 Papers.png | base64 -d > Papers.key
 +</code>
 +will:
 +  - read the QR code,
 +  - decode the data using base64,
 +  - output the result to the file ''Papers.key''
 +
 +====== Fixing Patch with Different Line Endings ======
 +
 +The general procedure is to make line endings the same for both the patch and the files to be patched. For instance, to normalize the line endings for all the files included in a patch:
 +<code bash>
 +grep '+++' dogview.patch | awk '{ print $2 }' | sed 's/b\///g' | xargs dos2unix
 +</code>
 +
 +followed by normalizing the line endings for ''dogview.patch'':
 +<code bash>
 +dos2unix dogview.patch
 +</code>
 +
 +After which, the patch can be applied:
 +<code bash>
 +patch -p1 < dogview.patch
 +</code>
 +
 +====== Sending Mail from the Linux Command Line using External Mail Servers ======
 +
 +The current options seem to be to use the following programs:
 +  * ''s-nail'' (formerly, ''nail''),
 +  * ''curl'',
 +  * ''ssmtp'' (not covered here because ''ssmtp'' seems to be an MTA and not an MDA such that it is not useful for these examples)
 +
 +As a general pitfall, note that the following error shows up frequently when issuing various commands found in online examples:
 +<code>
 +could not initiate TLS connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
 +</code>
 +
 +More often than not, in case a TLS connection has to be made via ''STARTTLS'', the problem is that the connection has to first be established in plain-text and only after issuing the ''STARTTLS'' command is the TLS protocol negotiated between the client and the server. Most examples for commands such as the ones above will tell the user to specify a connection string for TLS such as:
 +<code>
 +smtps://USERNAME:PASSWORD@MAILSERVER:PORT
 +</code>
 +where ''smtps'' would hint at encryption when, given the ''STARTTLS'' protocol, there should be no encryption at the time the connection is established. The usual fix is to replace ''smtps'' with ''smtp'' and make sure that the client actually issues ''STARTTLS'' and only then proceeds to encryption.
 +
 +===== S-Nail =====
 +
 +Using ''s-nail'' the command options would be the following:
 +<code bash>
 +s-nail -:/ \
 +    -Sv15-compat \
 +    -S ttycharset=utf8 \
 +    -S mta='smtp://USERNAME:PASSWORD@MAILSERVER:PORT' \
 +    -S smtp-use-starttls \
 +    -S smtp-auth=login \
 +    -S from=SENDER \
 +    -S subject=test \
 +    -end-options RECIPIENT
 +</code>
 +where:
 +  * ''USERNAME'' is the username of the account, for instance, for ''outlook.com'', the username is the entire E-Mail of the account; additionally, note that any special characters must be URI encoded,
 +  * ''PASSWORD'' is the URI encoded password for the account,
 +  * ''MAILSERVER'' is the E-Mail server host,
 +  * ''PORT'' is the E-Mail server port,
 +  * ''SENDER'' is the envelope sender (your E-Mail),
 +  * ''RECIPIENT'' is the destination
 +
 +===== cURL =====
 +
 +<code bash>
 +curl \
 +    --ssl-reqd \
 +    --url 'smtp://MAILSERVER:PORT/' \
 +    --mail-from 'SENDER' \
 +    --mail-rcpt 'RECIPIENT' \
 +    --user USERNAME:PASSWORD \
 +    -v \
 +    -T mail.txt
 +</code>
 +
 +and ''mail.txt'' has the following shape:
 +<code>
 +From: SENDER
 +To: RECIPIENT
 +Subject: SUBJECT
 +
 +BODY
 +
 +</code>
 +where:
 +  * ''USERNAME'' is the username of the account, for instance, for ''outlook.com'', the username is the entire E-Mail of the account
 +  * ''PASSWORD'' is the password for the account,
 +  * ''MAILSERVER'' is the E-Mail server host,
 +  * ''PORT'' is the E-Mail server port,
 +  * ''SENDER'' is the envelope sender (your E-Mail),
 +  * ''RECIPIENT'' is the destination,
 +  * ''SUBJECT'' is the subject for the E-Mail,
 +  * ''BODY'' is the body of the E-Mail
 +
 +Note that it is not necessary to use an additional file such as ''mail.txt'' for the E-Mail and it is possible to pipe the contents of the ''mail.txt'' from the command line by replacing ''-T mail.txt'' by ''-T -'' indicating that the E-Mail will be read from standard input. For example:
 +<code bash>
 +printf 'From: SENDER\nTo: RECIPIENT\nSubject: SUBJECT\n\nBODY\n' | curl \
 +    --ssl-reqd \
 +    --url 'smtp://MAILSERVER:PORT/' \
 +    --mail-from 'SENDER' \
 +    --mail-rcpt 'RECIPIENT' \
 +    --user USERNAME:PASSWORD \
 +    -v \
 +    -T -
 +</code>
 +
 +====== Quickly Wipe Partition Tables with Disk Dumper ======
 +
 +Partition tables can be zapped quickly using ''dd''.
 +
 +===== MBR =====
 +
 +<code bash>
 +dd if=/dev/zero of=/dev/sda bs=512 count=1
 +</code>
 +where:
 +  * ''/dev/sda'' is the drive to wipe the partition table for,
 +  * ''512'' is the number of bytes to write from the start of the disk,
 +  * ''1'' means writing ''bs'' bytes this number of times
 +
 +The byte count is calculated as $446B$ bootstrap + $64B$ partition table + $2B$ signature = $512B$.
 +
 +===== GPT =====
 +
 +GPT preserves an additional backup table at the end of the device, such that wiping the partition tables involves two commands:
 +  * wipe the table at the start of the drive,
 +  * wipe the backup table at the end of the drive
 +
 +The following commands should accomplish that:
 +<code bash>
 +dd if=/dev/zero of=/dev/sda bs=512 count=34
 +dd if=/dev/zero of=/dev/sda bs=512 count=34 seek=$((`blockdev --getsz /dev/sda` - 34))
 +</code>
 +where:
 +  * ''/dev/sda'' is the drive to wipe the partition table for
 +
 +====== Options when Referring to Block Devices by Identifier Fail ======
 +
 +On modern Linux systems, partitions are referred to via the partition UUID instead of the actual block device. One problem that will show up sooner or later is that in order to generate a partition UUID, a block device must have partitions in the first place. Similarly, partitions can be mounted via their disk labels, yet that will fail as well when a disk does not even have a partition table. This case is typical for whole-drive encryption with LUKS, where the absence of a label or partition table is desirable rather than an oversight.
 +
 +Assuming that the block device ''/dev/sda'' is part of a larger storage framework that, when initialized, does not set a marker, create a partition table or create a partition on the block device, the command:
 +<code bash>
 +blkid
 +</code>
 +will fail to list ''/dev/sda'' with any UUID. Now, assuming that there are several block devices in similar situations, such as ''/dev/sdb'', ''/dev/sdc'', etc., then when Linux reboots there will be no guarantee that the block device files will refer to the same drives.
 +
 +To work around this issue, ''udev'' can be leveraged and [[/fuss/udev#creating_specific_rules_for_devices|rules can be written in order to match the hard-drives]] at detect time and then create symlinks to the hard-drives that should be stable over reboots.
 +
 +For instance, issuing:
 +<code bash>
 +udevadm info -q all -n /dev/sda --attribute-walk
 +</code>
 +will output all the attributes of ''/dev/sda'', a few of which can be selected in order to construct a ''udev'' rule.
 +
 +For instance, based on the output of the command a file is created at ''/etc/udev/rules.d/10-drives.rules'' with the following contents:
 +<code>
 +SUBSYSTEM=="block", ATTRS{model}=="EZAZ-00SF3B0    ", ATTRS{vendor}=="WDC WD40", SYMLINK+="western"
 +</code>
 +
 +This rule will now match:
 +  * within the ''block'' device subsystem,
 +  * model name ''EZAZ-00SF3B0    '' as reported by the hardware,
 +  * vendor name ''WDC WD40''
 +and once matched will create a symbolic link named ''western'' within the ''/dev/'' filesystem that will point to whatever hardware device file the kernel generated for the drive.
 +
 +Now, it becomes easy to mount the drive using ''fstab'' because the symlink will be stable over reboots, guaranteeing that the ''/dev/western'' link will always point to the correct drive. The line in ''/etc/fstab'' would look similar to the following:
 +<code>
 +/dev/western     /mnt/western    ext4    defaults    0    0
 +</code>
 +where ''/dev/western'' is the source device symbolic link generated by ''udev'' on boot.
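After creating or editing the rules file, the rules can be applied without a reboot:

<code bash>
udevadm control --reload-rules          # re-read the rules files
udevadm trigger --subsystem-match=block # replay kernel events for block devices
</code>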
 +
 +
 +====== Setting Interface Metric ======
 +
 +One typical scenario that needs interface metrics is a laptop with both Ethernet and wireless connections established to the same local network. Linux does not automatically sense the fastest network connection, such that interface metrics should be established for all network interfaces.
 +
 +Typically, for Debian (or Ubuntu) Linux distributions, ''ifupdown'' is used to manage network interfaces and the ''ifmetric'' package can be installed using:
 +<code bash>
 +apt-get install ifmetric
 +</code>
 +
 +By installing the ''ifmetric'' package, a new ''metric'' option is now available that can be added to configured network interfaces in ''/etc/network/interfaces'' or ''/etc/network/interfaces.d''. For instance, one can set the metric to ''1'' for ''eth0'' (the Ethernet interface) and ''2'' for ''wlan0'' (the wireless interface), by editing the ifupdown interface file:
 +<code>
 +iface eth0 inet manual
 +    metric 1
 +    mtu 9000
 +
 +allow-hotplug wlan0
 +iface wlan0 inet dhcp
 +    metric 2
 +    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
 +
 +</code>
 +
 +Now, provided that both ''eth0'' and ''wlan0'' are on the same network, ''eth0'' will be the preferred interface to reach the local network.
 +
 +====== Reordering Partitions ======
 +
 +It might so happen that device numbers end up skewed after adding or removing partitions, such that the alphanumeric name (''sda1'', ''sdb2'', etc.) does not correspond to the contiguous partition layout. The partition indicators corresponding to the device names can be reordered using the ''fdisk'' tool by entering the expert menu with ''x'' and then pressing ''f'' to automatically rename the partitions to correspond to the partition layout.
 +
 +====== Multiplexing Video Device ======
 +
 +One problem with Video4Linux is that multiple processes cannot access the same hardware at the same time. This is particularly problematic for video devices that have to be read concurrently in order to perform various operations, such as streaming and taking a screenshot simultaneously, where one operation would disrupt the other.
 +
 +Fortunately, there is a third-party kernel module called ''v4l2loopback'' that, on its own, does nothing but create "virtual" v4l devices to which data can be written and then read by other programs. 
 +
 +In order to use ''v4l2loopback'', on Debian the kernel module can be installed through DKMS, by issuing:
 +<code bash>
 +apt-get install v4l2loopback-dkms v4l2loopback-utils
 +</code>
 +thereby ensuring that the kernel module will be automatically recompiled after a kernel upgrade.
 +
 +First, the module would have to be loaded upon boot, such that the file ''/etc/modules-load.d/v4l2loopback.conf'' has to be created with the following contents:
 +<code>
 +v4l2loopback
 +</code>
 +Creating the ''/etc/modules-load.d/v4l2loopback.conf'' now ensures that the module is loaded on boot, but additionally some parameters can be added to the loading of the kernel module by creating the file at ''/etc/modprobe.d/v4l2loopback.conf'' with the following contents:
 +<code>
 +options v4l2loopback video_nr=50,51 card_label="Microscope 1,Microscope 2"
 +</code>
 +where:
 +  * ''video_nr=50,51'' will create two virtual V4L devices, namely ''/dev/video50'' and respectively ''/dev/video51'',
 +  * ''Microscope 1'' and ''Microscope 2'' are descriptive labels for the devices.
 +
 +Now, the following will be accomplished:
 +
 +<ditaa>
 +                    v
 +                    | camera, microscope, etc
 +                    |
 +             +------+------+
 +             | Video Input |
 +             | /dev/video0 |
 +             +------+------+
 +                    |
 +        +-----------+-----------+
 +        | write                 | write
 +        v                       v
 + +------+-------+       +-------+------+
 + | /dev/video50 |       | /dev/video51 |
 + +------+-------+       +-------+------+
 +        |                       |
 +        v                       v
 +       read                   read
 +
 +</ditaa>
 +
 +That is, textually, a V4L device with its corresponding V4L device name at ''/dev/video0'' will be multiplexed to two virtual V4L devices, ''/dev/video50'' and ''/dev/video51'' respectively in order to allow two separate simultaneous reads from both ''/dev/video50'' and ''/dev/video51'' devices.
 +
 +In order to accomplish the multiplexing, given that ''v4l2loopback'' has already been set up, a simple command line suffices, such as:
 +<code bash>
 +cat /dev/video0 | tee /dev/video50 /dev/video51 > /dev/null
 +</code>
 +that will copy ''video0'' to ''video50'' and ''video51''.
 +
 +However, more elegantly and under SystemD, a service file can be used instead along with ''ffmpeg'':
 +<code>
 +[Unit]
 +Description=Microscope Clone
 +After=multi-user.target
 +Before=microscope.service microscope_button.service
 +
 +[Service]
 +ExecStart=/usr/bin/ffmpeg -hide_banner -loglevel quiet -f v4l2 -i /dev/video0 -codec copy -f v4l2 /dev/video50 -codec copy -f v4l2 /dev/video51
 +Restart=always
 +RestartSec=10
 +StandardOutput=syslog
 +StandardError=syslog
 +SyslogIdentifier=microscope
 +User=root
 +Group=root
 +Environment=PATH=/usr/bin/:/usr/local/bin/
 +
 +[Install]
 +WantedBy=microscope.target
 +
 +</code>
 +
 +The service file is placed inside ''/etc/systemd/system'' and uses ''ffmpeg'' to copy ''/dev/video0'' to ''/dev/video50'' and ''/dev/video51''. Interestingly, because ''ffmpeg'' is used, it is also entirely possible to apply video transformations to one or the other multiplexed devices or, say, to seamlessly transform the original stream for both.
 +
 +====== Retrieve External IP Address ======
 +
 +<code bash>
 +dig -b 192.168.1.2 +short myip.opendns.com @resolver1.opendns.com
 +</code>
 +where:
 +  * ''192.168.1.2'' is the local IP address of the interface that connects to the router
 +
 +Alternatively, for one single external interface, the ''-b 192.168.1.2'' option can be omitted.
 +
 +====== Substitute for ifenslave ======
 +
 +Modern Linux does not use the ''ifenslave'' utility in order to create bonding devices and add slaves. For instance, the ''ifenslave'' Debian package just contains some helper scripts for integrating with ''ifupdown''. The new way of managing bonding is to use the sysfs filesystem and write to files.
 +
 +Creating a bonding interface can be accomplished by:
 +<code bash>
 +echo "+bond0" >/sys/class/net/bonding_masters
 +</code>
 +where:
 +  * ''bond0'' is the bonding interface to create
 +respectively:
 +<code bash>
 +echo "-bond0" >/sys/class/net/bonding_masters
 +</code>
 +in order to remove a bonding interface.
 +
 +Next, slaves to the bonding interface can be added using (assuming ''bond0'' is the bonding interface):
 +<code bash>
 +echo "+ovn0"> /sys/class/net/bond0/bonding/slaves
 +</code>
 +where:
 +  * ''ovn0'' is the interface to enslave to the bonding interface ''bond0''
 +respectively:
 +<code bash>
 +echo "-ovn0"> /sys/class/net/bond0/bonding/slaves
 +</code>
 +to remove the interface ''ovn0'' as a slave from the bonding interface ''bond0''.
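The state of the bonding interface, including its current slaves, can then be inspected through procfs:

<code bash>
cat /proc/net/bonding/bond0
</code>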
 +
 +====== Ensure Directory is Not Written to If Not Mounted ======
 +
 +One trick to ensure that an underlying mount point directory is not written to if it is not yet mounted is to change its permissions to ''000'' effectively making the underlying directory inaccessible.
 +
+This is sometimes useful in scenarios where services are brought up on boot before a remote mount via CIFS or NFS completes, such that the mount might fail and the services being brought up will start writing to the local filesystem instead of the remotely mounted share.
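A complementary safeguard is to check whether the directory is actually a mount point before writing, using ''mountpoint'' from util-linux. The following is a minimal sketch; the ''write_guarded'' helper and the data file name are made up for illustration:

```shell
#!/bin/sh
# Only write into a directory if something is actually mounted on it,
# preventing data from silently landing on the local filesystem.
write_guarded() {
    dir="$1"
    shift
    if mountpoint -q "$dir"; then
        printf '%s\n' "$*" > "$dir/data.txt"
    else
        echo "refusing to write: $dir is not a mount point" >&2
        return 1
    fi
}

# a freshly created temporary directory is not a mount point,
# so the guard refuses to write into it
write_guarded "$(mktemp -d)" "some data" || echo "write was refused"
```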
 +
 +====== PAM: permit execution without password ======
 +
 +The following line:
 +<code>
 +auth       sufficient   pam_permit.so
 +</code>
+can be prepended to any service file within ''/etc/pam.d/'' in order to allow passwordless logins for the corresponding command or daemon.
 +
 +====== Using the SystemD Out of Memory (OOM) Software Killer ======
 +
+The typical Linux mitigation for OOM conditions is the "Out of Memory (OOM) Killer", a kernel process that monitors processes and kills one off as a last resort in order to prevent the machine from crashing. Unfortunately, the Linux OOM killer has a bad reputation, either by firing too late, when the machine is already too hosed to be able to even kill a process, or by "really being the last resort", meaning that the OOM killer will not be efficient at picking the right process and will wait too long while heavy processes (desktop environment, etc.) are already running.
 +
 +The following packages can be used to add an additional OOM killer to systems within a Docker swarm, all of these being userspace daemons:
 +
 +  * ''systemd-oomd'', ''oomd'' or ''earlyoom''
 +
 +Furthermore, the following sysctl parameter:
 +<code>
 +vm.oom_kill_allocating_task=1
 +</code>
+when added to the system sysctl configuration, will make Linux kill the process whose allocation would overcommit memory, instead of using heuristics and picking some other process to kill.
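To persist the setting across reboots it can be placed into a sysctl drop-in file; a minimal sketch, where the filename is an arbitrary choice:

```
# /etc/sysctl.d/90-oom.conf (hypothetical filename)
# kill the task whose allocation overcommits instead of letting the
# kernel heuristically pick a victim process
vm.oom_kill_allocating_task = 1
```

The file is applied on boot, or immediately via ''sysctl --system''; a one-off runtime change can also be made with ''sysctl -w vm.oom_kill_allocating_task=1''.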
 +
 +====== Using the Hangcheck-Timer Module as a Watchdog ======
 +
+The ''hangcheck-timer'' module is developed by Oracle, included in the Linux kernel and is meant to reboot a machine in case the machine is considered stalled. In order to do this, the module uses two timers and when the sum of delays for both timers exceeds a specified threshold, the machine reboots.
 +
 +In order to use the ''hangcheck-timer'' module, edit ''/etc/modules'' and add the module:
 +<code>
 +# /etc/modules: kernel modules to load at boot time.
 +#
 +# This file contains the names of kernel modules that should be loaded
 +# at boot time, one per line. Lines beginning with "#" are ignored.
 +# Parameters can be specified after the module name.
 +
 +hangcheck-timer
 +
 +</code>
 +to the list of modules to load at boot.
 +
+Then create a file placed at ''/etc/modprobe.d/hangcheck-timer.conf'' in order to include some customizations. For example, the file could contain the following:
 +<code>
 +options hangcheck-timer hangcheck_tick=1 hangcheck_margin=60 hangcheck_dump_tasks=1 hangcheck_reboot=1
 +</code>
+where the module options mean:
+  * ''hangcheck_tick'' - period of time between system checks (60s default),
+  * ''hangcheck_margin'' - maximum hang delay before resetting (180s default),
+  * ''hangcheck_dump_tasks'' - if nonzero, the machine will dump the system task state when the timer margin is exceeded,
+  * ''hangcheck_reboot'' - if nonzero, the machine will reboot when the timer margin is exceeded
+
+The "timer margin" referred to in the documentation is computed as the sum of ''hangcheck_tick'' and ''hangcheck_margin''. In this example, the system would have to be unresponsive for $1s + 60s = 61s$ in order for the ''hangcheck-timer'' module to reboot the machine.
 +
 +====== Trim Journal Log Size ======
 +
+As rsyslog is being replaced by journald on systems implementing SystemD, some defaults are being set for journald that might not be suitable in case the machine is meant to be used as a thin client. Debian, in particular, seems to set a maximal log size of $4GiB$, which is absurdly large if a thin client is meant to be created.
 +
 +In order to set the log size, edit ''/etc/systemd/journald.conf'' and make the following changes:
 +<code bash>
 +[Journal]
 +Compress=yes
 +# maximal log size
 +SystemMaxUse=1G
 +# ensure at least this much space is available
 +SystemKeepFree=1G
 +
 +</code>
 +where:
 +  * ''Compress'' makes journald compress the log files,
 +  * ''SystemMaxUse'' is the maximal amount of space that will be dedicated to log files,
 +  * ''SystemKeepFree'' is the amount of free space to ensure is free
 +
+After making the changes, issue the command ''systemctl restart systemd-journald'' in order to apply them; note that ''systemctl daemon-reload'' only reloads unit files and does not apply changes to ''journald.conf''. An already oversized journal can be trimmed immediately using ''journalctl --vacuum-size=1G''.
 +
 +====== List Connected Wireless Clients ======
 +
 +When using the hostapd daemon, the clients can be queried by running the command:
 +<code bash>
 +hostapd_cli -p /var/run/hostapd all_sta
 +</code>
+but for that to work the control interface has to be enabled in ''hostapd.conf'', because hostapd will create a socket under ''/var/run/hostapd'' that is used to query the status.
 +
 +The following changes have to be made:
 +<code>
 +ctrl_interface=/run/hostapd
 +ctrl_interface_group=0
 +
 +</code>
 +where:
 +  * ''/run/hostapd'' is where the directory is placed,
 +  * ''0'' references the root group such that only the root user can access
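The ''all_sta'' output interleaves station MAC addresses with ''key=value'' statistics lines, so listing just the connected clients is a matter of filtering. The following is a minimal sketch; the sample output is illustrative, not captured from a real access point:

```shell
#!/bin/sh
# Filter station MAC addresses out of `hostapd_cli all_sta` output,
# which mixes MAC address lines with key=value statistics lines.
extract_macs() {
    grep -Eio '^([0-9a-f]{2}:){5}[0-9a-f]{2}$'
}

# illustrative sample of all_sta output
sample='02:00:00:00:01:00
flags=[AUTH][ASSOC][AUTHORIZED]
rx_packets=1024
02:00:00:00:02:00
flags=[AUTH][ASSOC]'

printf '%s\n' "$sample" | extract_macs
```

In practice the output of ''hostapd_cli -p /var/run/hostapd all_sta'' would be piped through the filter instead of the sample text.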
 +
 +====== Transforming Symlinks Recursively into Real Files ======
 +
 +The following command:
 +<code bash>
 +find /search -type l -name link -exec rsync /path/to/file {} \;
 +</code>
 +where:
 +  * ''-type l'' is the type of file for ''find'' to search meaning a symlink,
 +  * ''-name link'' tells ''find'' to find files named ''link'',
 +  * ''rsync /path/to/file {}'' instructs ''rsync'' to copy the file at ''/path/to/file'' onto the files named ''link'' in the path named ''/search''
 +
+This works due to the default behavior of ''rsync'': the copied file is written out and then renamed over the destination, which replaces the symlink with a regular file, whereas ''cp'' would follow the symlink and overwrite its target instead.
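When there is no single source file and every symlink should instead be replaced by a copy of whatever it points to, GNU ''cp'' with ''--remove-destination'' can be used so that the symlink itself is replaced rather than its target overwritten. The following is a minimal sketch that handles regular-file targets only; the ''materialize_links'' helper name is made up:

```shell
#!/bin/sh
# Replace every symlink under a directory with a regular-file copy of
# its target; --remove-destination unlinks the symlink first so that
# cp does not dereference it when writing the copy.
materialize_links() {
    find "$1" -type l -exec sh -c '
        target=$(readlink -f "$1") &&
        cp --remove-destination "$target" "$1"
    ' _ '{}' \;
}

# demonstration on a throwaway tree
dir=$(mktemp -d)
echo "payload" > "$dir/original"
ln -s "$dir/original" "$dir/link"
materialize_links "$dir"
ls -l "$dir/link"
```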
 +
+====== Mapping Device Mapper Block Devices to Block Devices ======
+
+Sometimes errors are reported by the kernel by referencing drives using device mapper (''dm'') nodes but in order to fix the issue a block device name (ie: ''sd...'') would be more useful. The following command will map device mapper nodes to block device names such that the issue can be tracked down:
 +<code bash>
 +dmsetup ls -o blkdevname
 +</code>
 +
 +====== Running the Filesystem Checker Before Mounting Filesystems ======
 +
+One of the problems on Linux with filesystems is that if they fail to mount on boot then they are marked as failed and all services that depend on the filesystem will also fail to start. Typically, the resolution is to run the filesystem checker, repair any damage and only then mount the filesystem. Most of the time any damage can be repaired; however, there is very little control or practical decision making left up to the user when the filesystem is repaired, with the decisions bouncing between fixing some damage or not. This applies, in particular, to the ''ext'' series of filesystems.
 +
+With that being said, the following systemd service file will check the filesystem before mounting it, simply by running the filesystem checker ''fsck'' with the ''-y'' parameter that makes the filesystem checker repair any damage automatically without prompting the user. Note that, modernly, for large filesystems, passing ''-y'' and accepting all repairs is the "normal" way of performing filesystem checks, due to the storage space being so large that it would be impractical to prompt the user on whether each filesystem node should be repaired or not.
 +
+Even though systemd provides a dedicated mount unit type, mount units do not offer any hook that would allow the user to run a command before or after mounting a filesystem. Instead, the following service file uses the ''oneshot'' systemd service type and runs the filesystem checker using ''ExecStartPre''.
+<code>
+[Unit]
+Description=mount docker
+DefaultDependencies=no
+
+[Service]
+Type=oneshot
+ExecStartPre=/bin/bash -c "/usr/sbin/fsck.ext4 -y /dev/mapper/docker || true"
+ExecStart=/usr/bin/mount \
+  -o errors=remount-ro,lazytime,x-systemd.device-timeout=10s \
+  /dev/mapper/docker \
+  /mnt/docker
+
+[Install]
+WantedBy=local-fs.target
+
+</code>
+
+In order to be invoked as part of the systemd boot when the local filesystems are mounted, the service file uses ''local-fs.target'' within the ''[Install]'' section, which ensures that the service runs at the same time as the local filesystems are being mounted. Note that appending ''|| true'' to the ''fsck'' command ensures that a non-zero exit status from ''fsck'' will not prevent the filesystem from being mounted.
 +
 +====== Rescan Hotplugged Drives ======
 +
+Even though hotplug should be handled automatically via udev, sometimes newly inserted drives (or removed drives, for that matter) do not show up and it is necessary to issue a rescan manually. In order to do so, the following command:
+<code bash>
+for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done
+</code>
+will issue a scanning request to all SCSI hosts on the system; this includes ATA drives as well, since modern ATA devices are exposed through the SCSI subsystem (libata). The three dashes are wildcards matching any channel, target and LUN.
 +
 +After the command is issued a command like ''lsblk'' should be able to show any changes.
 +
 +====== Shred and Remove Files ======
 +
 +^ Command Line Aspect ^ Visual Mnemonic Graft ^
 +| ''-f -u -n 1'' | {{fuss:fuss_linux_ideogram_mnemonic_fun.png?nolink&128}} |
 +
 +There are multiple solutions for wiping files before deleting and perhaps the most systematic one is ''bcwipe'' due to the algorithms that it implements. Without installing any new tools, the ''shred'' tool on Linux should do the job but with the only drawback that the command cannot recurse a filesystem tree such that it should be called using a tool like ''find''. The following command:
 +<code bash>
 +find . -name '*.delete' -exec shred -f -u -n 1 '{}' \;
 +</code>
+will perform one pass of random data across the entire file (''-n 1''), will force permission changes in case the file cannot otherwise be written (''-f'') and will unlink / delete the file after shredding it (''-u'').
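The invocation can be verified safely on a throwaway tree before being run against real data; a minimal sketch:

```shell
#!/bin/sh
# Exercise find + shred on a disposable tree: every *.delete file is
# overwritten with one pass of random data (-n 1) and unlinked (-u),
# while other files are left untouched.
dir=$(mktemp -d)
echo "secret" > "$dir/a.delete"
mkdir "$dir/sub"
echo "secret" > "$dir/sub/b.delete"
echo "keep me" > "$dir/keep.txt"

find "$dir" -name '*.delete' -exec shred -f -u -n 1 '{}' \;

ls -R "$dir"
```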
 +
  
  

fuss/linux.1594181993.txt.gz · Last modified: 2020/07/08 04:19 by office
