gnulinux [2018/10/23 11:23]
dlicious Added console logging management
gnulinux [2023/01/11 18:56] (current)
dlicious Added Firefox session recovery section
 +===== Firefox session recovery =====
 +
 +If Firefox crashes and you lose all (or some of) your tabs/windows, this might get them back.
 +
 +  - Make a backup/tarball of ''~/.mozilla/firefox/<profile>/sessionstore-backups'' (do this immediately, you don't have to stop Firefox first)
 +  - Quit Firefox gracefully, so that it creates the file ''~/.mozilla/firefox/<profile>/sessionstore.jsonlz4''
 +  - Before starting Firefox back up again, find the file ''.../sessionstore-backups/previous.jsonlz4'' in your backup, and overwrite the existing ''sessionstore.jsonlz4'' file
 +  - When you start Firefox back up, hopefully your old session will be restored.  If that doesn't work, try all the other ''*.jsonlz4'' files (quit Firefox and overwrite the main ''sessionstore.jsonlz4'' file with each one, and start Firefox back up to test)
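 +The backup and restore steps above can be sketched as shell commands.  This is a hypothetical walkthrough: ''$profile'' stands in for your real profile directory (the actual name varies, something like ''~/.mozilla/firefox/xxxxxxxx.default-release''), and a throwaway directory is created here so the commands can be shown end to end.

```shell
# Hypothetical stand-in for the real profile directory, populated with fake
# session files so the sequence can be demonstrated end to end.
profile=$(mktemp -d)
mkdir -p "${profile}/sessionstore-backups"
echo "old session" > "${profile}/sessionstore-backups/previous.jsonlz4"
echo "crashed session" > "${profile}/sessionstore.jsonlz4"

# Step 1: back up the sessionstore-backups directory immediately.
tar -czf "${profile}-backup.tar.gz" -C "${profile}" sessionstore-backups

# Steps 2-3: after quitting Firefox gracefully, overwrite the main session
# file with the pre-crash snapshot.
cp "${profile}/sessionstore-backups/previous.jsonlz4" \
   "${profile}/sessionstore.jsonlz4"

cat "${profile}/sessionstore.jsonlz4"   # → old session
```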
 +
 +===== Manual transactions in yum shell =====
 +
 +This is an alternate method of using YUM to add/remove packages.  You won't usually need to do this, but it can be very helpful in situations like this:
 +
 +  * You want to replace "package1" with "package2"
 +  * As soon as you try to remove "package1", YUM also wants to remove a bunch of other stuff that depends on "package1"
 +  * You know that "package2" also fulfills those dependencies
 +
 +First, create a file called ''yum-transaction.txt'' with the following contents (you can have as many "remove" and "install" lines as necessary, if the situation is more complex):
 +
 +<code>
 +remove package1
 +install package2
 +run
 +</code>
 +
 +Then, run the transaction like so (for testing):
 +
 +<code>
 +yum shell < yum-transaction.txt
 +</code>
 +
 +The command will run, evaluate the dependencies, and then exit without doing anything when it hits the interactive prompt (it won't wait for you to answer; it immediately acts as if you'd responded "no").  If the result looks like what you want, run it like this to have yum automatically agree to the prompt and apply the changes:
 +
 +<code>
 +yum -y shell < yum-transaction.txt
 +</code>
 +
 +===== OpenSSL cheatsheet =====
 +
 +Generate a private key (''private-key.key''), CSR (''certificate-signing-request.csr''), and self-signed certificate (''self-signed-certificate.crt'') in one non-interactive command pipeline.
 +
 +**Note:** The SAN hostnames don't make it into the certificate with this method, although they //are// in the CSR.  Presumably the second openssl invocation, which generates the (self-signed) certificate, doesn't carry the X.509v3 extensions over from the CSR.  The CSR itself //is// fine to submit to a CA for signing, SAN hostnames included.
 +
 +<code>
 +openssl req -subj "/C=US/ST=State/L=Locality/O=Organization/CN=www.example.com" -new -addext "subjectAltName=DNS:www.example.com,DNS:example.com,DNS:example.net" -newkey rsa:2048 -nodes -keyout private-key.key | tee certificate-signing-request.csr | openssl x509 -signkey private-key.key -req -days 365 -out self-signed-certificate.crt 
 +</code>
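 +As an alternative, if you only need the self-signed certificate (and aren't going to submit anything to a CA), OpenSSL 1.1.1 or newer can generate it directly with ''req -x509'', and by that route the ''-addext'' SAN //does// land in the certificate.  A sketch, reusing the same hypothetical names as above:

```shell
# Assumes OpenSSL 1.1.1+ (for -addext).  This skips the CSR entirely and
# self-signs in one step, so the subjectAltName extension ends up in the
# certificate itself.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/C=US/ST=State/L=Locality/O=Organization/CN=www.example.com" \
    -addext "subjectAltName=DNS:www.example.com,DNS:example.com,DNS:example.net" \
    -keyout private-key.key -out self-signed-certificate.crt

# Confirm the SAN is present in the finished certificate:
openssl x509 -in self-signed-certificate.crt -noout -text \
    | grep -A1 "Subject Alternative Name"
```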
 +
 +Generate a CSR (''matching-certificate-signing-request.csr'') based on an existing certificate (''existing-certificate.crt'') and private key (''existing-private-key.key'').  This lets you submit the CSR and get a new signed certificate from the CA that still matches your existing private key.
 +
 +<code>
 +openssl x509 -x509toreq -in existing-certificate.crt -out matching-certificate-signing-request.csr -signkey existing-private-key.key
 +</code>
 +
 +Compare certificates/CSRs/private keys to see if they match each other (run each file through its corresponding command and make sure the outputs match //exactly//).
 +
 +<code>
 +key_modulus=$(  openssl rsa  -modulus -noout -in private-key.key                 )
 +csr_modulus=$(  openssl req  -modulus -noout -in certificate-signing-request.csr )
 +cert_modulus=$( openssl x509 -modulus -noout -in certificate.crt                 )
 +
 +[ "${key_modulus}" = "${csr_modulus}"  ] || echo "Key/CSR mismatch"
 +[ "${key_modulus}" = "${cert_modulus}" ] || echo "Key/certificate mismatch"
 +[ "${csr_modulus}" = "${cert_modulus}" ] || echo "CSR/certificate mismatch"
 +</code>
 +
 +===== jq cheatsheet =====
 +
 +==== Print object keys/values ====
 +
 +Given the following JSON input:
 +
 +<code>
 +{
 +    "key1": "value1",
 +    "key2": "value2"
 +}
 +</code>
 +
 +...use the following jq script:
 +
 +<code>
 +jq -r 'to_entries[] | "\(.key) : \(.value)"'
 +</code>
 +
 +...to produce this output:
 +
 +<code>
 +key1 : value1
 +key2 : value2
 +</code>
 +
 +===== tcpdump expression cheatsheet =====
 +
 +==== Show only SYN packets ====
 +
 +First, the common case: just show all SYN packets, in both directions.  The ''tcp &&'' at the beginning is conceptually redundant, since TCP flags by definition won't exist in any other protocol, but telling the filter up front that you're only interested in TCP may let it optimize.
 +
 +<code>
 +tcpdump -i tun0 "tcp && tcp[tcpflags] == tcp-syn"
 +</code>
 +
 +Here's essentially the same thing with more filtering, as a quickstart to fiddle with when you need to narrow things down (high-traffic hosts, etc.).  In this case, we're only showing SYN packets that have a) a destination of ''10.9.8.7/32'' and b) a destination port of ''443'':
 +
 +<code>
 +tcpdump -i tun0 "tcp && tcp[tcpflags] == tcp-syn && dst net 10.9.8.7/32 && dst port 443"
 +</code>
 +
 +Finally, let's add to the previous query and also show any SYN packets to/from our local network in addition to the outgoing ones we're already showing:
 +
 +<code>
 +tcpdump -i tun0 "tcp && tcp[tcpflags] == tcp-syn && ( ( dst net 10.9.8.7/32 && dst port 443 ) || net 192.168.1.0/24 )"
 +</code>
 +
 +==== SSH connections on any port ====
 +
 +This expression shows SSH connections regardless of port; is that clever or what?  Credit to [[https://danielmiessler.com/study/tcpdump]] (which is a great tcpdump cheatsheet page).
 +
 +<code>
 +tcpdump -i tun0 "tcp[(tcp[12]>>2):4] = 0x5353482D"
 +</code>
 +
 +==== Using capture files ====
 +
 +You can capture network traffic to a file, then analyze the contents later without having to work with live traffic (this can be really helpful, for example, if you're not 100% sure what you're looking for but you have limited time to capture behaviour).
 +
 +To capture to a file instead of stdout, just add ''-w filename.pcap'' to the command line.  The ''.pcap'' extension isn't required, but it's the standard extension that indicates the file's contents (Wireshark, formerly Ethereal, can also read/write these files).  For example, to simply capture all packet metadata to a file:
 +
 +<code>
 +tcpdump -i tun0 -w all-packets-tun0.pcap
 +</code>
 +
 +If you want to capture all the //data// as well, don't forget to add ''-s 0'' (be careful, this can lead to //huge// capture files on busy systems).  You can also add an optional expression so that only certain packets are captured, just like when you're processing live traffic:
 +
 +<code>
 +tcpdump -i tun0 -s 0 -w all-http-traffic-tun0.pcap "dst port 80"
 +</code>
 +
 +Analyzing from a capture file is exactly the same as analyzing live data, except that you add ''-r filename.pcap'' to the command line:
 +
 +<code>
 +## Dump everything in the capture file.
 +tcpdump -r all-http-traffic-tun0.pcap
 +
 +## This will show the same thing, since this expression was already used to
 +## filter the same thing at capture time.
 +tcpdump -r all-http-traffic-tun0.pcap "dst port 80"
 +
 +## This shows traffic to an HTTP service on example.com; it'll still only show
 +## destination port 80 because that's all that's in the capture file in the
 +## first place.
 +tcpdump -r all-http-traffic-tun0.pcap "dst host example.com"
 +</code>
 +
 +===== Vim cheatsheet =====
 +
 +==== Soft text wrap ====
 +
 +Normally Vim wraps lines by just continuing the character stream onto the next line; with this, you can get it to actually soft-wrap on word boundaries.  This can make things a lot more ergonomic if you're editing text with long lines and you don't want to do a lot of horizontal scrolling or have words cut off in the middle.  This is basically just a TLDR version of this page: [[https://vim.fandom.com/wiki/Word_wrap_without_line_breaks]].
 +
 +<code>
 +:set wrap linebreak textwidth=0 wrapmargin=0
 +</code>
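 +To make this the default instead of typing it each session, the same settings can go in ''~/.vimrc'' (without the leading colon):

```vim
" Soft-wrap long lines on word boundaries
set wrap linebreak textwidth=0 wrapmargin=0
```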
 +
 +===== CentOS multi-version package retention =====
 +
 +This is one way to free up some space in /boot if it's small and you're running out of space.
 +
 +<code>
 +## Change `installonly_limit` to 3
 +sed -i -e 's/\(installonly_limit=\).*/\13/' /etc/yum.conf
 +
 +## Bring the current package installs in line (YUM will maintain it going forward)
 +package-cleanup --oldkernels --count=3
 +</code>
 +
 +===== Puppet runtime message triage =====
 +
 +Let's say you get a message like this (this is a notice, but you'd break down an error the same way):
 +
 +<code>
 +Notice: /Stage[main]/Splunk::Forwarder/File[/opt/splunkforwarder/etc/system/local/outputs.conf]/ensure: created
 +         ----- ----  ------  --------- ---- --------------------------------------------------  ------  -------
 +            \    \      \        \       \                           \                             \       `> Action's desired end state
 +             \    \      \        \       \                           \                             `> Action property
 +              \    \      \        \       \                           `> Resource name
 +               \    \      \        \       `> Resource type
 +                \    \      \        `> Manifest name
 +                 \    \      `> Module name
 +                  \    `> Run stage ("main" is the default)
 +                   `> Log level
 +</code>
 +
 +You can break it up into its (slash-delimited) component parts:
 +
 +==== Stage[main] ====
 +
 +This is the run stage.  Unless a manifest explicitly declares additional stages, every resource runs in the default ''main'' stage, so this part will almost always read ''Stage[main]''.
 +
 +==== Splunk::Forwarder ====
 +
 +This means the module name is ''Splunk'', and the manifest name is ''Forwarder''
 +
 +To find the module, first get the module path with:
 +
 +<code bash>
 +puppet module list 2>/dev/null | awk '/^\// { print $1; }'
 +</code>
 +
 +Then, for each directory in the module path, run this to find a module directory with the correct name (convert the module name to lower case):
 +
 +<code bash>
 +find <directory> -maxdepth 1 -type d -name <module> -print
 +</code>
 +
 +Once you've found the module directory, the manifest file should be here:
 +
 +<code>
 +<path>/<module>/manifests/<manifest>.pp
 +</code>
 +
 +For example, this is how it might look in practice, given the above log message:
 +
 +<code bash>
 +# puppet module list 2>/dev/null | awk '/^\// { print $1; }'
 +/etc/puppetlabs/code/modules
 +/opt/puppetlabs/puppet/modules
 +
 +# for directory in /etc/puppetlabs/code/modules /opt/puppetlabs/puppet/modules; do find "${directory}" -maxdepth 1 -type d -name splunk -print; done
 +/etc/puppetlabs/code/modules/splunk
 +
 +# ls -ld /etc/puppetlabs/code/modules/splunk/manifests/forwarder.pp
 +/etc/puppetlabs/code/modules/splunk/manifests/forwarder.pp
 +</code>
 +
 +==== File[/opt/splunkforwarder/etc/system/local/outputs.conf] ====
 +
 +Now that you know which file to look in, you need to find the specific resource in question.  In this case, you're looking for a "file" resource named ''/opt/splunkforwarder/etc/system/local/outputs.conf''.
 +
 +On a side note: file resources are usually named after the file they manage, but that's not //necessarily// true.  What's listed in the Puppet log message is the //name// of the resource, which doesn't technically //have// to be the filename.  Just something to be aware of.
 +
 +It should be pretty easy to find by grepping for the resource name (which usually looks like a filename); but in general you're looking for something like:
 +
 +<code>
 +<type> { "<name>":...
 +</code>
 +
 +...or in this case:
 +
 +<code>
 +file { "/opt/splunkforwarder/etc/system/local/outputs.conf":...
 +</code>
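 +In practice, a recursive grep over the module's ''manifests'' directory finds it quickly.  A sketch (the paths are stand-ins; a throwaway manifest tree is created here so the command can be shown end to end — in real life you'd point grep at the module directory found earlier):

```shell
# Stand-in for <path>/<module>/manifests with one manifest declaring the
# file resource from the log message.
manifests=$(mktemp -d)
cat > "${manifests}/forwarder.pp" <<'EOF'
file { "/opt/splunkforwarder/etc/system/local/outputs.conf":
  ensure => file,
}
EOF

# Find which manifest declares the resource:
grep -rn 'outputs\.conf' "${manifests}"
```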
 +
 +==== ensure: created ====
 +
 +You should be off and running at this point, but just to clarify: this last part is the resource property that changed (''ensure'') and the end state it reached (''created''); in this case, Puppet created the file because it didn't exist.  The properties being managed are all listed in the resource definition.
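 +For reference, a minimal (hypothetical) version of such a resource, showing where ''ensure'' lives — the real one will have more attributes (content/source, owner, mode, and so on):

```puppet
file { "/opt/splunkforwarder/etc/system/local/outputs.conf":
  ensure => file,   # "ensure: created" in the log reports on this property
}
```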
 +
 +===== SSH file relay =====
 +
 +==== Overview ====
 +
 +This is the problem being solved for:
 +
 +  - You have a file on a remote system named ''srchost''
 +  - You want to copy that file to a second remote system named ''desthost''
 +  - ''srchost'' and ''desthost'' cannot connect directly to each other using SSH
 +  - Your local machine can connect to both systems using SSH
 +
 +This solution allows you to use your local system as a seamless relay.  The advantages of this over just doing two separate rsync/scp transfers are:
 +
 +  - It takes less time because you're not waiting for the download to finish before starting the upload
 +  - The actual transfer speed is faster because the data are never written to disk
 +
 +==== Pre-transfer recon ====
 +
 +First, get the size of the file you're transferring, in bytes, from ''srchost''.  This step isn't technically necessary but if it's a large file you definitely want to have this information, since it'll allow you to have an ETA for the transfer.
 +
 +<code bash>
 +ssh srchost "du --bytes file.gz"
 +</code>
 +
 +==== Doing the transfer ====
 +
 +You can log out of ''srchost'' now if you like (if you're short on screen space or something).  To do the transfer, you'll SSH to ''srchost'' and cat the file, with cat's stdout piped into pv, and pv's stdout piped into the stdin of another ssh process that writes it to a file on ''desthost'':
 +
 +<code bash>
 +ssh srchost "cat file.gz" | pv -s 903242 | ssh desthost "cat - > file.gz"
 +</code>
 +
 +The ''-s <bytes>'' argument passed to pv is the size of the file being transferred (since you're using a pipe, pv can't determine the final size on its own).  If you don't know the size and forgot to get it (with ls, du, etc.), you can leave that argument off; pv will still display the transfer speed, you just won't get an ETA.  If you don't have pv installed at all, you should really install it for stuff like this, but you can also leave it out of the pipeline entirely.  The transfer will work the same way, you just won't get any progress/rate display (you can always log in to the destination system and monitor the file size there).
 +
 +Here's a script that should make this pretty painless.  Believe it or not it actually has rudimentary resume capability, since it checks the destination file size to see if something's there already and uses tail to start re-transferring at the right byte.  I still wouldn't trust it any further than I could throw it, but it might save your bacon.  Just re-run the same command line and it'll automatically resume (it's designed to be idempotent).  If you do have to resume, I'd run an md5sum or something on the file on both systems to make sure they match; it worked when I tried it (I killed the same transfer three or four times to test) but I make no binding promises!
 +
 +<file bash ssh-relay-transfer.sh>
 +#!/bin/sh
 +
 +## In order for this script to work, the following must be installed (except
 +## for pv, these should all be in your base systems):
 +##
 +## On the local host       : ssh expr pv
 +## On the source host      : ssh du cut tail
 +## On the destination host : ssh du cut touch cat
 +##
 +## Exit status is 0 on success, 2 if the destination file is the same size
 +## as the source file.  If one of the commands fails, it'll probably exit
 +## with a 1.  The external commands might exit 2 as well; if you have verbose
 +## on, then you'll see a message on stderr from this script if it's exiting 2
 +## because the files were the same size (so if you have an exit status of 2 and
 +## no message, you know it came from an external command).
 +
 +## Hostnames of the two servers, as you would pass them to the SSH client
 +source_host="srchost"
 +destination_host="desthost"
 +
 +## These filenames are going straight to the SSH client, so relative paths
 +## are relative to your home directory on each remote system.
 +##
 +## There's no reason the file in question has to be a .gz file, I'm just using
 +## that extension as an example; any file will work the same way.
 +##
 +## You can leave the destination filename empty and the source filename will
 +## be used (including any path)
 +source_filename="testfile.gz"
 +#destination_filename=""
 +
 +## Set this to >0 to have the script print some diagnostics to stderr
 +verbose=1
 +
 +###############################################################################
 +###############################################################################
 +
 +## Retrieve the source file's size (bail out if we couldn't get it)
 +source_file_size=$(ssh "${source_host}" "du --bytes ${source_filename} | cut --fields 1")
 +[ -z "${source_file_size}" ] && { echo "Couldn't get source file size" >&2; exit 1; }
 +
 +## Use source filename for destination if we don't already have one
 +[ -z "${destination_filename}" ] && destination_filename="${source_filename}"
 +
 +## Retrieve the destination file's size
 +destination_file_size=$(ssh "${destination_host}" "du --bytes ${destination_filename} 2>/dev/null | cut --fields 1")
 +[ -z "${destination_file_size}" ] && destination_file_size=0
 +
 +if [ ${destination_file_size} -eq ${source_file_size} ]; then
 +    ## Files are the same size, we must be done; exit with a unique status
 +    [ ${verbose} -ge 1 ] && echo "File sizes are the same; looks like we're done." >&2
 +    exit 2
 +fi
 +
 +if [ ${destination_file_size} -gt 0 ]; then
 +    byte_offset=$(expr ${destination_file_size} + 1)
 +    transfer_size=$(expr ${source_file_size} - ${destination_file_size})
 +else
 +    ## tail's -c +N offsets are 1-based, so +1 means "start at the first byte"
 +    byte_offset=1
 +    transfer_size=${source_file_size}
 +fi
 +
 +if [ ${verbose} -ge 1 ]; then
 +    ## Print diagnostics to stderr
 +    echo "Source                : ${source_host}:${source_filename}"                       >&2
 +    echo "Destination           : ${destination_host}:${destination_filename}"             >&2
 +    echo "Source file size      : ${source_file_size}"                                     >&2
 +    echo "Destination file size : ${destination_file_size}"                                >&2
 +    echo "Byte offset           : ${byte_offset}"                                          >&2
 +    echo "Transfer size         : ${transfer_size}"                                        >&2
 +    echo ""                                                                                >&2
 +    echo "###############################################################################" >&2
 +    echo ""                                                                                >&2
 +fi
 +
 +## Release the hounds!
 +ssh "${source_host}" "tail -c +${byte_offset} ${source_filename}" | pv -s ${transfer_size} | ssh "${destination_host}" "touch ${destination_filename}; cat - >> ${destination_filename}"
 +
 +## EOF
 +########
 +</file>
 ===== NTP troubleshooting =====
  
 
Except where otherwise noted, content on this wiki is licensed under the following license: GNU Free Documentation License 1.3