If Firefox crashes and you lose all (or some of) your tabs/windows, this might get them back.

1. Immediately make a backup copy of ~/.mozilla/firefox/<profile>/sessionstore-backups and ~/.mozilla/firefox/<profile>/sessionstore.jsonlz4 (do this immediately, you don't have to stop Firefox first).
2. Quit Firefox, find …/sessionstore-backups/previous.jsonlz4 in your backup, and overwrite the existing sessionstore.jsonlz4 file with it.
3. Start Firefox back up. If that doesn't restore your session, try the other *.jsonlz4 files (quit Firefox and overwrite the main sessionstore.jsonlz4 file with each one, and start Firefox back up to test).
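A quick sketch of the backup step (the <profile> directory name is whatever random-looking directory is in ~/.mozilla/firefox; the rescue directory name here is just an example):

cd ~/.mozilla/firefox/<profile>
mkdir -p ~/firefox-session-rescue
cp -a sessionstore-backups sessionstore.jsonlz4 ~/firefox-session-rescue/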
This is an alternate method of using YUM to add/remove packages. You won't usually need to do this, but it can be very helpful in more complex situations, like when several removals and installs need to happen together in a single dependency-checked transaction:
First, create a file called yum-transaction.txt with the following contents (you can have as many "remove" and "install" lines as necessary, if the situation is more complex):

remove package1
install package2
run
Then, run the transaction like so (for testing):
yum shell < yum-transaction.txt
The command will run, evaluate the dependencies, and then automatically exit without doing anything when it hits the interactive prompt (it won't wait for you to answer, it'll just immediately act as if you'd responded “no”). If the result looks like what you want, run it like this to have yum automatically agree to the prompt and apply the changes:
yum -y shell < yum-transaction.txt
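As noted above, a transaction file can hold several operations that get applied as one dependency-checked transaction. A hypothetical example (package names are made up for illustration):

remove httpd
install nginx
install nginx-mod-stream
run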
Generate a private key (private-key.key), CSR (certificate-signing-request.csr), and self-signed certificate (self-signed-certificate.crt) in one non-interactive command pipeline.
Note: The SAN hostnames don't make it into the certificate with this method, although they are in the CSR. I'm not sure why this is, but I guess it's because the second invocation of openssl, which generates the (self-signed) certificate, doesn't understand the X.509v3 extensions from the CSR (in OpenSSL 3.0 and later, openssl x509 has a -copy_extensions option that may address this). The CSR is good to go to submit to a CA for signing, including the SAN hostnames.
openssl req -subj "/C=US/ST=State/L=Locality/O=Organization/CN=www.example.com" -new -addext "subjectAltName=DNS:www.example.com,DNS:example.com,DNS:example.net" -newkey rsa:2048 -nodes -keyout private-key.key | tee certificate-signing-request.csr | openssl x509 -signkey private-key.key -req -days 365 -out self-signed-certificate.crt
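To confirm where the SAN names ended up, you can dump the extensions from both files (a quick check using the filenames from above; expect the CSR to show them and the certificate not to):

openssl req -in certificate-signing-request.csr -noout -text | grep -A 1 "Subject Alternative Name"
openssl x509 -in self-signed-certificate.crt -noout -text | grep -A 1 "Subject Alternative Name"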
Generate a CSR (matching-certificate-signing-request.csr) based on an existing certificate (existing-certificate.crt) and private key (existing-private-key.key). This allows you to submit the CSR and get a new signed certificate from the CA that still matches your existing private key.
openssl x509 -x509toreq -in existing-certificate.crt -out matching-certificate-signing-request.csr -signkey existing-private-key.key
Compare certificates/CSRs/private keys to see if they match each other (run each file through its corresponding command and make sure the outputs match exactly).
key_modulus=$( openssl rsa -modulus -noout -in private-key.key )
csr_modulus=$( openssl req -modulus -noout -in certificate-signing-request.csr )
cert_modulus=$( openssl x509 -modulus -noout -in certificate.crt )

[ "${key_modulus}" = "${csr_modulus}" ]  || echo "Key/CSR mismatch"
[ "${key_modulus}" = "${cert_modulus}" ] || echo "Key/certificate mismatch"
[ "${csr_modulus}" = "${cert_modulus}" ] || echo "CSR/certificate mismatch"
Given the following JSON input:
{ "key1": "value1", "key2": "value2" }
…use the following jq script:
jq -r '. | to_entries[] | "\(.key) : \(.value)"'
…to produce this output:
key1 : value1
key2 : value2
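Putting it together as a single pipeline (the -r flag gives raw output, without the JSON string quoting jq would otherwise print):

echo '{ "key1": "value1", "key2": "value2" }' | jq -r '. | to_entries[] | "\(.key) : \(.value)"'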
First, the common case: just show all SYN packets, in both directions. The tcp && at the beginning is conceptually redundant, since you're checking for TCP flags, which by definition won't exist in any other protocol; but telling the filter up front that you're only interested in TCP may allow it to optimize.
tcpdump -i tun0 "tcp && tcp[tcpflags] == tcp-syn"
Here's essentially the same thing with more filtering, as a quickstart to fiddle with when you need to narrow things down (high-traffic hosts, etc.). In this case, we're only showing SYN packets that have a) a destination of 10.9.8.7/32, and b) a destination port of 443:
tcpdump -i tun0 "tcp && tcp[tcpflags] == tcp-syn && dst net 10.9.8.7/32 && dst port 443"
Finally, let's add to the previous query and also show any SYN packets to/from our local network in addition to the outgoing ones we're already showing:
tcpdump -i tun0 "tcp && tcp[tcpflags] == tcp-syn && ( ( dst net 10.9.8.7/32 && dst port 443 ) || net 192.168.1.0/24 )"
This expression shows SSH connections regardless of port; is that clever or what? It works by using the TCP data offset field (tcp[12]>>2 yields the header length in bytes) to locate the start of the payload, then comparing the first four payload bytes against 0x5353482D, which is ASCII for "SSH-" (the start of the SSH version banner). Credit to https://danielmiessler.com/study/tcpdump (which is a great tcpdump cheatsheet page).
tcpdump -i tun0 "tcp[(tcp[12]>>2):4] = 0x5353482D"
You can capture network traffic to a file, then analyze the contents later without having to work with live traffic (this can be really helpful, for example, if you're not 100% sure what you're looking for but you have limited time to capture behaviour).
To capture to a file instead of stdout, just add -w filename.pcap to the command line. The .pcap extension isn't required, but it's the standard extension that indicates the file's contents (for example, Ethereal/Wireshark can also read/write these files). For example, to simply capture all packet metadata to a file:
tcpdump -i tun0 -w all-packets-tun0.pcap
If you want to capture all the data as well, don't forget to add -s 0 (be careful, this can lead to huge capture files on busy systems). You can also add an optional expression so that only certain packets are captured, just like when you're processing live traffic:
tcpdump -i tun0 -s 0 -w all-http-traffic-tun0.pcap "dst port 80"
Analyzing from a capture file is exactly the same as analyzing live data, except that you add -r filename.pcap to the command line:
## Dump everything in the capture file.
tcpdump -r all-http-traffic-tun0.pcap

## This will show the same thing, since this expression was already used to
## filter the same thing at capture time.
tcpdump -r all-http-traffic-tun0.pcap "dst port 80"

## This shows traffic to an HTTP service on example.com; it'll still only show
## destination port 80 because that's all that's in the capture file in the
## first place.
tcpdump -r all-http-traffic-tun0.pcap "dst host example.com"
Normally Vim wraps lines by just continuing the character stream onto the next line; with this, you can get it to actually soft-wrap on word boundaries. This can make things a lot more ergonomic if you're editing text with long lines and you don't want to do a lot of horizontal scrolling or have words cut off in the middle. This is basically just a TLDR version of this page: https://vim.fandom.com/wiki/Word_wrap_without_line_breaks.
:set wrap linebreak textwidth=0 wrapmargin=0
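One related tip (standard Vim behaviour, not something the command above changes): with soft wrapping on, j and k still move by physical lines, while gj and gk move by display lines. If that bothers you, you can remap them:

:noremap j gj
:noremap k gk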
This is one way to free up some space in /boot if it's small and you're running out of space.
## Change `installonly_limit` to 3
sed -i -e 's/\(installonly_limit=\).*/\13/' /etc/yum.conf

## Bring the current package installs in line (YUM will maintain it going forward)
package-cleanup --oldkernels --count=3
Let's say you get a message like this (this is a notice, but you'd break down an error the same way):
Notice: /Stage[main]/Splunk::Forwarder/File[/opt/splunkforwarder/etc/system/local/outputs.conf]/ensure: created
         ----- ----  ------  --------- ---- --------------------------------------------------  ------  -------
         \     \     \       \         \    \                                                   \       `> Action's desired end state
         \     \     \       \         \    \                                                   `> Action property
         \     \     \       \         \    `> Resource name
         \     \     \       \         `> Resource type
         \     \     \       `> Manifest name
         \     \     `> Module name
         \     `> ???
         `> ???
You can break it up into its (slash-delimited) component parts:

Stage[main]: Not sure on the details of this part yet.

Splunk::Forwarder: This means the module name is Splunk, and the manifest name is Forwarder.
To find the module, first get the module path with:
puppet module list 2>/dev/null | awk '/^\// { print $1; }'
Then, for each directory in the module path, run this to find a module directory with the correct name. Convert the module name to lower case.
find <directory> -maxdepth 1 -type d -name <module> -print
Once you've found the module directory, the manifest file should be here:
<path>/<module>/manifests/<manifest>.pp
For example, this is how it might look in practice, given the above log message:
# puppet module list 2>/dev/null | awk '/^\// { print $1; }'
/etc/puppetlabs/code/modules
/opt/puppetlabs/puppet/modules

# for directory in /etc/puppetlabs/code/modules /opt/puppetlabs/puppet/modules; do find "${directory}" -maxdepth 1 -type d -name splunk -print; done
/etc/puppetlabs/code/modules/splunk

# ls -ld /etc/puppetlabs/code/modules/splunk/manifests/forwarder.pp
/etc/puppetlabs/code/modules/splunk/manifests/forwarder.pp
Now that you know which file to look in, you need to find the specific resource in question. In this case, you're looking for a "file" resource named /opt/splunkforwarder/etc/system/local/outputs.conf.
On a side note: file resources are usually named after the file they manage, but that's not necessarily true. What's listed in the Puppet log message is the name of the resource, which doesn't technically have to be the filename. Just something to be aware of.
It should be pretty easy to find by grepping for the resource name (which usually looks like a filename); but in general you're looking for something like:

<type> { "<name>": ...

…or in this case:

file { "/opt/splunkforwarder/etc/system/local/outputs.conf": ...
You should be off and running at this point, but just to clarify: the "ensure" part is an "action" (not sure if that's the right Puppet term) that the resource performs (in this case, it's ensuring that the file exists). The actions will all be listed in the resource definition.
This is the problem being solved for:

- You have a file on srchost that needs to end up on desthost.
- Your local system can SSH to both srchost and desthost.
- srchost and desthost cannot connect directly to each other using SSH.

This solution allows you to use your local system as a seamless relay. The advantage of this over just doing two separate rsync/scp transfers is that the data streams straight through in a single pass, without an intermediate copy landing on your local disk.
First, get the size of the file you're transferring, in bytes, from srchost. This step isn't technically necessary, but if it's a large file you definitely want to have this information, since it'll allow you to have an ETA for the transfer.
ssh srchost "du --bytes file.gz"
You can log out of srchost now if you like (if you're short on screen space or something). To do the transfer, you'll SSH to srchost and cat the file, with cat's stdout wired up to pv's stdin, and then pv's stdout wired up to the stdin of another ssh process, which takes that stdin and writes it to a file on desthost:
ssh srchost "cat file.gz" | pv -s 903242 | ssh desthost "cat - > file.gz"
The -s <bytes> argument passed to pv is the size of the file being transferred (since you're using a pipe, pv can't determine the final size on its own). If you don't know the size and forgot to get it (with ls, du, etc.), then you can leave that argument off and pv will still display the transfer speed; you just won't get an ETA. If you don't have pv installed…well, you should really install it for stuff like this, but if you really don't have it then you can just leave it out of the pipeline entirely. The transfer will still work the same way, you just won't get any progress/rate display. You can always log in to the destination system and monitor the file size there, I guess.
Here's a script that should make this pretty painless. Believe it or not, it actually has rudimentary resume capability: it checks the destination file size to see if something's there already, and uses tail to start re-transferring at the right byte. I still wouldn't trust it any further than I could throw it, but it might save your bacon. Just re-run the same command line and it'll automatically resume (it's designed to be idempotent). If you do have to resume, I'd run an md5sum or something on the file on both systems to make sure they match; it worked when I tried it (I killed the same transfer three or four times to test) but I make no binding promises!
#!/bin/sh

## In order for this script to work, the following must be installed (except
## for pv, these should all be in your base systems):
##
## On the local host       : ssh expr pv
## On the source host      : ssh du cut tail
## On the destination host : ssh du cut touch cat
##
## Exit status is 0 on success, 2 if the destination file is the same size
## as the source file. If one of the commands fails, it'll probably exit
## with a 1. The external commands might exit 2 as well; if you have verbose
## on, then you'll see a message on stderr from this script if it's exiting 2
## because the files were the same size (so if you have an exit status of 2 and
## no message, you know it came from an external command).

## Hostnames of the two servers, as you would pass them to the SSH client
source_host="srchost"
destination_host="desthost"

## These filenames are going straight to the SSH client, so relative paths
## are relative to your home directory on each remote system.
##
## There's no reason the file in question has to be a .gz file, I'm just using
## that extension as an example; any file will work the same way.
##
## You can leave the destination filename empty and the source filename will
## be used (including any path)
source_filename="testfile.gz"
#destination_filename=""

## Set this to >0 to have the script print some diagnostics to stderr
verbose=1

###############################################################################
###############################################################################

## Retrieve the source file's size
source_file_size=$(ssh "${source_host}" "du --bytes ${source_filename} | cut --fields 1")

## Use the source filename for the destination if we don't already have one
[ -z "${destination_filename}" ] && destination_filename="${source_filename}"

## Retrieve the destination file's size
destination_file_size=$(ssh "${destination_host}" "du --bytes ${destination_filename} 2>/dev/null | cut --fields 1")
[ -z "${destination_file_size}" ] && destination_file_size=0

if [ ${destination_file_size} -eq ${source_file_size} ]; then
    ## Files are the same size, we must be done; exit with a unique status
    [ ${verbose} -ge 1 ] && echo "File sizes are the same; looks like we're done." >&2
    exit 2
fi

if [ ${destination_file_size} -gt 0 ]; then
    byte_offset=$(expr ${destination_file_size} + 1)
    transfer_size=$(expr ${source_file_size} - ${destination_file_size})
else
    byte_offset=0
    transfer_size=${source_file_size}
fi

if [ ${verbose} -ge 1 ]; then
    ## Print diagnostics to stderr
    echo "Source                : ${source_host}:${source_filename}" >&2
    echo "Destination           : ${destination_host}:${destination_filename}" >&2
    echo "Source file size      : ${source_file_size}" >&2
    echo "Destination file size : ${destination_file_size}" >&2
    echo "Byte offset           : ${byte_offset}" >&2
    echo "Transfer size         : ${transfer_size}" >&2
    echo "" >&2
    echo "###############################################################################" >&2
    echo "" >&2
fi

## Release the hounds!
ssh "${source_host}" "tail -c +${byte_offset} ${source_filename}" \
    | pv -s ${transfer_size} \
    | ssh "${destination_host}" "touch ${destination_filename}; cat - >> ${destination_filename}"

## EOF ##########
This column (from the output of ntpdc -p) shows the success of the most recent synchronization attempts, as an octal number. The short answer is that a completely unreachable server will have 0 in this column, and the ideal value in the standard configuration is 377. You can use bc and/or Python to convert the octal number into its binary equivalent (for this example, what appears in the "reach" column is 175):
% echo 'obase=2; ibase=8; 175' | bc
1111101

% python -c 'print("{0:08b}".format(0o175))'
01111101
There are a couple of things to note about these commands:

- Note the prefix on the octal number in the Python command (0o175). That's standard notation for octal (in the same way that 0x is the prefix for hexadecimal and 0b is the prefix for binary) and is how Python knows that the input number is in octal instead of decimal. (The bc command doesn't need a prefix; a leading zero there would just be stripped, since bc knows it's octal because of the ibase=8 statement.)
- In both cases, the output means that the eight most recent sync attempts went as follows: a failure, followed by five successes, followed by a failure, and finally the most recent attempt, which succeeded (you read the digits left-to-right, oldest to newest).
The reference documentation for this feature is in RFC 1305 ("Network Time Protocol"), section 3.2.3 ("Peer Variables").
Sources:
Based on https://forums.virtualbox.org/viewtopic.php?f=6&t=38646#p173539
| Adapter | Mode      | State | Purpose                                                                                 |
|---------|-----------|-------|-----------------------------------------------------------------------------------------|
| 1       | NAT       | on    | Internet Connectivity                                                                   |
| 2       | Host-only | on    | Private communication between host and guest that still works with no other networking |
| 3       | Bridged   | off   | Inbound connectivity from other LAN machines                                            |
| 4       | Internal  | off   | Private connectivity between VMs                                                        |
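A sketch of how this might be set up from the command line (assumes a VM named "myvm" and host-side interface/network names like vboxnet0, eth0, and intnet0 that you'd adjust to match your system; here the "off" state is modeled as a disconnected cable):

VBoxManage modifyvm myvm --nic1 nat
VBoxManage modifyvm myvm --nic2 hostonly --hostonlyadapter2 vboxnet0
VBoxManage modifyvm myvm --nic3 bridged --bridgeadapter3 eth0 --cableconnected3 off
VBoxManage modifyvm myvm --nic4 intnet --intnet4 intnet0 --cableconnected4 off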
Disable image transition effects in the slideshow module:
echo "EffectsEnabled=false" >> ~/.kde/share/config/kslideshow.kssrc
150521 10:41:58 [Note] Plugin 'FEDERATED' is disabled.
150521 10:41:58 InnoDB: The InnoDB memory heap is disabled
150521 10:41:58 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150521 10:41:58 InnoDB: Compressed tables use zlib 1.2.3.4
150521 10:41:58 InnoDB: Initializing buffer pool, size = 80.0M
150521 10:41:58 InnoDB: Completed initialization of buffer pool
InnoDB: Error: checksum mismatch in data file ./ibdata1
150521 10:41:58 InnoDB: Could not open or create data files.
150521 10:41:58 InnoDB: If you tried to add new data files, and it failed here,
150521 10:41:58 InnoDB: you should now edit innodb_data_file_path in my.cnf back
150521 10:41:58 InnoDB: to what it was, and remove the new ibdata files InnoDB created
150521 10:41:58 InnoDB: in this failed attempt. InnoDB only wrote those files full of
150521 10:41:58 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
150521 10:41:58 InnoDB: remove old data files which contain your precious data!
150521 10:41:58 [ERROR] Plugin 'InnoDB' init function returned error.
150521 10:41:58 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150521 10:41:58 [ERROR] Unknown/unsupported storage engine: innodb
150521 10:41:58 [ERROR] Aborting
150521 10:41:58 [Note] /usr/sbin/mysqld: Shutdown complete
/usr/sbin/mysqld-akonadi --defaults-file=/home/john/.local/share/akonadi/mysql.conf --datadir=/home/john/.local/share/akonadi/db_data/ --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket

mysqldump --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket --events --flush-privileges

mysql --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket

mysqladmin --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket
$ innochecksum -v /home/john/.local/share/akonadi/db_data/ibdata1
file /home/john/.local/share/akonadi/db_data/ibdata1 = 169869312 bytes (10368 pages)...
checking pages in range 0 to 10367
page 0 invalid (fails log sequence number check)

$ grep InnoDB ~/.local/share/akonadi/db_data/*/*.frm
Binary file /home/john/.local/share/akonadi/db_data/akonadi/collectionattributetable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/collectionmimetyperelation.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/collectionpimitemrelation.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/collectiontable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/flagtable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/mimetypetable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/parttable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/pimitemflagrelation.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/pimitemtable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/resourcetable.frm matches
Binary file /home/john/.local/share/akonadi/db_data/akonadi/schemaversiontable.frm matches

$ /usr/sbin/mysqld-akonadi --defaults-file=/home/john/.local/share/akonadi/mysql.conf --datadir=/home/john/.local/share/akonadi/db_data/ --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket --innodb-force-recovery=1

$ mysqldump --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket --events --flush-privileges akonadi | gzip -1 > akonadi.sql.gz

$ mysql --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket
mysql> drop database akonadi;
Query OK, 11 rows affected (0.71 sec)

mysql> create database akonadi;
Query OK, 1 row affected (0.00 sec)

mysql> \q

$ gzip -dc akonadi.sql.gz | mysql --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket akonadi
ERROR 1030 (HY000) at line 42: Got error -1 from storage engine

$ # line 42 is the first insert statement after creating the first table
$ # read error message about how you can't alter tables with --force-recovery... OK, but you'll let me drop a database? NOT COOL BRO

$ mysqladmin --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket shutdown

$ /usr/sbin/mysqld-akonadi --defaults-file=/home/john/.local/share/akonadi/mysql.conf --datadir=/home/john/.local/share/akonadi/db_data/ --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket

$ gzip -dc akonadi.sql.gz | mysql --defaults-file=/home/john/.local/share/akonadi/mysql.conf --socket=/home/john/.local/share/akonadi/socket-wopr/mysql.socket akonadi

$ # success!
This is useful when you need to figure out which ID is root so you can change its password:
#!/bin/sh

user_records=$(racadm get 'iDRAC.Users' | cut -d ' ' -f 1)

for user_record in ${user_records}; do
    username=$(racadm get "${user_record}" | awk -F '=' '/^UserName=/ { print $2; }')
    printf "%-14s %s\n" "${user_record}" "${username:-<unset>}"
done
The “2” in this example is the user ID, which you can find from the above enumeration loop (although “root” is usually ID 2).
racadm set 'iDRAC.Users.2.Password' "${new_drac_password}"
Update /etc/systemd/journald.conf:

- Set ForwardToConsole to no.
- TTYPath defaults to /dev/console…perhaps set it to /dev/tty1?
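A minimal sketch of what that looks like in the config file (the TTYPath line only matters if forwarding is left on, so it's shown commented out here):

[Journal]
ForwardToConsole=no
#TTYPath=/dev/tty1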
- Disable printing kernel messages to the console: dmesg -D (https://superuser.com/a/793692/128124)
- Kernel command line: loglevel=0 (https://stackoverflow.com/questions/16390004)
- Edit /etc/rsyslog.conf and remove lines that send things to the console; look for lines targeting /dev/console or /dev/sysmsg, for example (https://serverfault.com/questions/392299/syslog-written-on-console).
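For example, a stock Red Hat-style rsyslog.conf often contains a rule like this, which is the kind of line to remove or comment out:

# Log all kernel messages to the console.
kern.*    /dev/console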