
Logging for Security vs. Debugging

Published on: 2018-07-25

One recent development in ASP.NET is the addition of logging within the framework. On the surface, this sounds like a great idea. As I've written before, you can't catch the bad guys if you don't see them, and (good) logging is a great way to see bad guys. There's one problem, though: the logging that Microsoft has implemented (and the logging present in most, if not all, web frameworks) doesn't really help you see bad guys, much less stop them from doing damage. It should be no surprise, then, that an average of 99 days passes between a security breach and its detection. It's reasonable to assume that catching criminals sooner would lower the cost of each breach, which averages $17 million. (Both figures come from the Security Practice Development Playbook from Microsoft.) What can we do to lower our risk?

Security vs. Development/Debugging

First, we should acknowledge that there's a difference between what needs to be logged from a developer's perspective and from a security perspective. There is some overlap, of course. A request failing because of a CSRF validation error would be of interest to both a developer and a security professional. But a security person is primarily interested in keeping the system safe: he or she wouldn't necessarily care whether a particular method was called, whereas developers frequently log that information to debug the program. On the other hand, a developer looking to keep the system running smoothly logs information primarily to debug any errors that occur. Many developers don't care why certain data validations fail as long as dirty data doesn't get into the database, while a security person would care very much, since repeated validation failures can be a sign that a malicious actor is probing the system.

There are other examples as well. A security professional needs to know who is causing problems in order to take remedial action against the person. A software developer needs to know what is causing a problem in order to prevent errors from happening again.

How That Translates to What Gets Logged

Because of these differences, what gets logged for security vs. development/debugging should also be different. A security log would contain the action, the potential reason for risk, and the source of the error (e.g., a user name, IP address, and other means to identify the potential hacker). A debugging log would contain the stack trace and error message, but it would not necessarily identify who performed the action that caused the error or track the cause in a machine-readable way. (At least not yet—I sincerely hope that someone somewhere is working on a machine learning algorithm that looks at error messages to create self-correcting code.) As such, the typical logging levels (Debug, Info, Warn, Error, Fatal, etc.) are mostly useless for security purposes. To make that clearer, here are a couple of examples:

What is logged if an attacker attempts an XSS attack and the system successfully prevents it by altering the data so it is no longer dangerous? Is that an error, because malicious data was submitted? Is it merely logged at a "Debug" priority in case someone was trying to enter legitimate data? Or is the change not logged at all, because the system successfully removed the potentially malicious characters?

How about a failed login attempt? This could be merely "information" from a logging perspective, until you have several of them in a short period, in which case the priority level becomes higher.

This gets even more important when you realize that many production systems don't log low-priority events at all, to save space in the error log. Imagine logging a failed password attempt as an "Error" just because that's the only way to get it to show up in the production error log. That's a bit ridiculous!
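To make the failed-login example concrete, here's a minimal sketch (every name here is hypothetical, not part of ASP.NET or any existing library) of a tracker where the threat level of the *same* event rises as failures from one source accumulate, rather than depending on a fixed Debug/Info/Error level:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative threat levels for a security log, not ASP.NET log levels.
public enum ThreatLevel { Low, Suspicious, LikelyAttack }

public class FailedLoginTracker
{
    private readonly Dictionary<string, List<DateTime>> _attempts = new();
    private readonly TimeSpan _window = TimeSpan.FromMinutes(10);

    // Records a failure and returns the threat level for this event,
    // based on how many recent failures share the same source address.
    public ThreatLevel RecordFailure(string sourceIp, DateTime when)
    {
        if (!_attempts.TryGetValue(sourceIp, out var list))
            _attempts[sourceIp] = list = new List<DateTime>();
        list.Add(when);

        int recent = list.Count(t => when - t < _window);
        if (recent >= 10) return ThreatLevel.LikelyAttack;
        if (recent >= 3) return ThreatLevel.Suspicious;
        return ThreatLevel.Low;
    }
}
```

The first failure from an address reports as Low; the third within ten minutes reports as Suspicious, and no one had to change a log-level setting for the escalation to show up.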

Why it Matters

With all that said, it's important to note that in ASP.NET, Microsoft has chosen to log events from a developer/debugging perspective. That is useful and important, to be sure. But if you migrated to ASP.NET Core expecting its logging to cover your security needs, you're going to be sorely disappointed. This is especially important if you intend to use that logging for PCI or HIPAA compliance – the Microsoft logging mechanism just wasn't built with security monitoring in mind. Many errors that would occur because of a potential attack aren't logged (because the system successfully blocked them), and many errors that are logged would not be caught in production (because they are too low a priority).

Because of all this, I believe we need separate logs: one explicitly for debugging and one explicitly for security events. The security log should record event types along with threat levels, rather than severity. Finally, the security log's levels should not be configurable per environment: skilled attackers try to remain undetected by avoiding obvious attacks, and if you only log the obvious ones, you will miss the subtler intrusion attempts.
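As a sketch of what such a dedicated security log entry might look like (every type and member name here is illustrative, not an existing ASP.NET API), each event carries an event type, a threat level, and the identity of the source – not a Debug/Info/Error severity:

```csharp
using System;

// Illustrative event taxonomy for a dedicated security log.
public enum SecurityEventType { LoginFailure, CsrfValidationFailure, XssAttemptSanitized, ValidationFailure }
public enum ThreatLevel { Low, Suspicious, LikelyAttack }

public record SecurityEvent(
    DateTime TimestampUtc,
    SecurityEventType EventType,
    ThreatLevel Threat,
    string? UserName,   // who, if authenticated
    string SourceIp,    // where the request came from
    string Detail);     // e.g. what was sanitized or rejected

public static class SecurityLog
{
    // Every event is written unconditionally: no per-environment level
    // filtering, so low-and-slow probes are still captured in production.
    public static void Write(SecurityEvent e) =>
        Console.WriteLine($"{e.TimestampUtc:o} {e.EventType} {e.Threat} user={e.UserName ?? "-"} ip={e.SourceIp} {e.Detail}");
}
```

Note that the sanitized XSS attempt from the earlier example gets its own event type here, so it is recorded even though the system "succeeded" – exactly the case a debugging log would silently drop.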

How to Make This Easier to Implement

Unfortunately, in order to create a true security log in ASP.NET, you would need to implement some logging deep within the framework, since most of the useful security information is lost by the time your custom code sees it. My ASP.NET Security Enhancer does, in fact, log more detailed security events to a dedicated security log, rather than mixing everything together as Microsoft does in its ASP.NET objects. It requires very little configuration or setup, so adding true security logging should not be hard to do. Please contact me for more details!