Dave Laird
2004-12-24 13:41:52 UTC
Good morning, everyone...
Yesterday my *second* system compromise in twelve-plus years took place,
and it wasn't even my system, but one of a dozen which I maintain for
other people and companies. The Bob-a-Do, who originally taught me how to
deal with such things as compromises, taught me well, if not often, if not
*loudly* about what to do in such circumstances, and thus his training,
combined with twelve years' experience at managing Linux boxes, paid
dividends.
Yesterday's events weren't actually a compromise, at least if you assume
that a compromise means a user has been able to obtain escalated
privileges and thus gain control over or damage a vital server, for
yesterday's events were what is clinically called a SEGFAULT: a
segmentation fault, meaning a process tried to use memory it had no
business touching and was stopped dead in its tracks, taking its work with
it. Yesterday's event actually started a long time ago when the client
decided not to maintain their system with the latest and greatest updates,
and thus left themselves exposed to risk.
My *first* compromise, which took place in my third year of running a
Linux server, also happened because I neglected to update my system in a
timely manner. I can still remember having my ears burned over that one, a mail
configuration error which resulted in all kinds of dire things one never
mentions in public. ;-] Even worse, I got caught at it in public, and thus
learned *never* to configure any process or server without first knowing
what the hell I'm doing BEFORE doing it.
Months before yesterday's event, I had already sent several messages
to the client regarding a number of Python and PHP vulnerabilities, nearly
all of which had been thoroughly documented by CERT and various other network
security gurus around the world. While I am certain that, until yesterday, the
client assumed that because they are in Spokane, Washington, none of the
world threat assessments *really* applied to them, I am also certain they
hold a much different view today, as they are suddenly very aware of a
number of issues.
Like many attempts to compromise machines, successful or failed, this one
came at an inopportune time of day. All kinds of dire things are tried
between midnight and dawn, at least in the network world as I see it.
Three forty one AM Pacific Standard Time is as good as any other time for
an exploit, because I've seen exploits attempted nearly every hour of the
day. However, this was *my* exploit; it was *my* watch, and thus, despite
the client's refusal to update their software, it was my responsibility.
I was bundled up in my bed fast asleep, with Suzie's warm body curled in
my arms, when the alarms in my office started their persistent cacophony
of beeps, whistles and farting noises until I finally rose from my bed to
see what the hell the noise was about.
"What in the HELL is going on in here?" I angrily asked my workstation,
quenching the noise makers long enough to quickly scan the monitor.
As fortune would have it, the client's machine had gone offline under very
mysterious circumstances. While some system processes, such as SSHD, cron
and the logger, were all still running, Apache had died, and mail was no
longer responsive. Since I could not gain control of the machine in
question, and since at that hour of the day nothing seemed to be working,
I grabbed copies of the log files for everything and made a pot of coffee.
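
(For the record, the log grab was nothing exotic. Something along the
lines of the following sketch, in Python, would do the job; the file list
and the destination directory are my assumptions about a typical box of
that vintage, not what the client's machine actually used.)

#!/usr/bin/env python
# Sketch only: snapshot the log files for a post-mortem before touching
# anything else. The paths and destination below are assumptions, not a
# description of the client's actual layout.
import hashlib
import os
import shutil
import time

LOGS = [
    "/var/log/messages",
    "/var/log/secure",
    "/var/log/maillog",
    "/var/log/httpd/access_log",
    "/var/log/httpd/error_log",
]

dest = "/root/postmortem-" + time.strftime("%Y%m%d-%H%M%S")
os.makedirs(dest)

for path in LOGS:
    if not os.path.exists(path):
        continue
    copy = os.path.join(dest, os.path.basename(path))
    shutil.copy2(path, copy)                  # keep timestamps with the data
    digest = hashlib.sha1(open(copy, "rb").read()).hexdigest()
    print("%s  %s" % (digest, copy))          # checksum for later comparison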
In the pre-Bob-a-Do days, I would have simply rebooted the computer, but I
learned never to do that. Instead, I disconnected the box from the outside
world by downing the ethernet card, incidentally cutting myself off in the
process, and simply began studying the log files to see what had happened.
Less than an hour later, I discovered a number of POST requests in the
Apache log file carrying Python code, all designed to attack the web mail
interface. Huh. All were successful because the version of Python used to
run the tricky little web mail program was woefully outdated (I *knew*
that), and had several vulnerabilities to its name.
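
(The hunt itself was about as plain as it gets: walk the access log and
pull out the POST requests aimed at the web mail scripts. A rough sketch
of the idea follows, again in Python; the log location and the "webmail"
fragment in the URL are assumptions for illustration, not the client's
actual setup.)

#!/usr/bin/env python
# Sketch only: flag POST requests against the web mail interface in an
# Apache common/combined-format access log. The log path and the "webmail"
# URL fragment are assumptions for illustration.
import re

ACCESS_LOG = "/var/log/httpd/access_log"
POST_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "POST ([^"]*)" (\d{3})')

with open(ACCESS_LOG) as log:
    for line in log:
        match = POST_LINE.match(line)
        if match and "webmail" in match.group(3):
            ip, when, request, status = match.groups()
            print("%s  %s  POST %s -> %s" % (when, ip, request, status))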
I spent a good part of yesterday gathering the Python libraries and tools
needed to update *just* that part of their Internet presence. Then, with
an eye to the fact that I had only cured *part* of the greater problem, I
ordered the system rebooted and applied all the Python and web mail
patches to their existing software. Stuff now works, the client is happy
once again, but they have been informed they *must* update their entire
system if they are to expect flawless performance in the future.
As of this morning, the approvals have arrived to truly fix the issue,
through a systematic, *complete* updating of all their software.
Technically I am off the hook when it comes to placing blame, since I had
preached to them in the past about the need to upgrade their software.
Ultimately, however, I was in charge when the system went down. <shrug> Blame is like
dog turds in the park... everyone gets a little, at one time or another,
whether they want some or not.
I doubt that my original tutor, the Bob-a-Do, would approve of my short-term
solution, but I am also certain that, had he been in my shoes at the time, he
would have approved of my methods and the post-mortem I performed before I
turned the damned thing back on.
To the Bob-a-Do's credit, I only went into panic mode that first minute I
stood before the monitor swearing up a blue streak. The rest of the time I
was simply mad as hell... in a manner approved by the Bob-a-Do, that is.
Dave
--
Dave Laird (***@kharma.net)
The Used Kharma Lot
Web Page: http://www.kharma.net updated 11/24/2004
Usenet news server : news://news.kharma.net
Fortune Random Thought For the Minute
Know thyself. If you need help, call the C.I.A.