
Log Uncaught Errors in Scala/Akka

17 Sep, 2015

At my work we have a long-running service built on the Akka-Spray stack. Recently it crashed, and we checked its log file for clues about the cause, but there was nothing there to help us.
Eventually we did find the cause: an OutOfMemoryError had been thrown in one of the actors, and because it wasn't caught anywhere (and it shouldn't be), it terminated the entire actor system.
It would have saved us some time if this error had been logged somewhere, so that is what this blog post is about.

System.err
The stack trace of any uncaught exception or error is written to System.err automatically by the JVM's default uncaught exception handler. System.err writes to the standard error stream of the process, which normally is the console. I want to change this for our server, so at startup I redirect System.err to a custom OutputStream of my own, called LoggingOutputStream, like this:
[code language="scala"]
import java.io.PrintStream
import org.apache.log4j.{Level, Logger}

System.setErr(new PrintStream(new LoggingOutputStream(Logger.getRootLogger, Level.ERROR), true))
[/code]
The LoggingOutputStream will write anything that would normally go to System.err to log4j’s RootLogger instead, with log level ERROR.
What’s left is the implementation of our LoggingOutputStream:
[code language="scala"]
import java.io.{IOException, OutputStream}

import org.apache.log4j.{Category, Priority}

class LoggingOutputStream(category: Category, priority: Priority) extends OutputStream {
  private val LINE_SEPARATOR = System.getProperty("line.separator")

  private var closed = false
  private var buffer = new Array[Byte](2048)
  private var count = 0

  override def close() {
    flush()
    closed = true
  }

  @throws(classOf[IOException])
  override def write(b: Int) {
    if (closed) {
      throw new IOException("The stream has been closed!")
    }
    // Don't log nulls
    if (b == 0) {
      return
    }
    if (count == buffer.length) {
      // The buffer is full; grow it
      val newBuffer = new Array[Byte](2 * buffer.length)
      System.arraycopy(buffer, 0, newBuffer, 0, buffer.length)
      buffer = newBuffer
    }
    buffer(count) = b.toByte
    count += 1
  }

  override def flush() {
    if (count == 0) {
      return
    }
    // Don't log blank lines; flushing from PrintStream produces these
    if (!isBlankLine) {
      category.log(priority, new String(buffer, 0, count))
    }
    reset()
  }

  // A buffer containing nothing but the platform line separator counts as a blank line
  private def isBlankLine =
    count == LINE_SEPARATOR.length && new String(buffer, 0, count) == LINE_SEPARATOR

  private def reset() {
    count = 0
  }
}
[/code]
Of course this solution is not specific to Akka; it will work in any Scala application.
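
To see the redirection in action, here is a minimal, self-contained sketch. It assumes log4j 1.x is on the classpath; the demo object and the BasicConfigurator call are illustrative only, not part of the original service. An uncaught exception in a background thread is printed to System.err by the default uncaught exception handler and therefore ends up in the log:
[code language="scala"]
import java.io.PrintStream

import org.apache.log4j.{BasicConfigurator, Level, Logger}

object LoggingOutputStreamDemo extends App {
  // Minimal log4j setup for the demo; a real service would use its own configuration
  BasicConfigurator.configure()

  // Redirect System.err to the root logger, as described above
  System.setErr(new PrintStream(new LoggingOutputStream(Logger.getRootLogger, Level.ERROR), true))

  // The default uncaught exception handler prints this stack trace to System.err,
  // so it now shows up in the log at level ERROR
  val t = new Thread(new Runnable {
    def run(): Unit = throw new RuntimeException("boom")
  })
  t.start()
  t.join()
}
[/code]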

1 Comment
Arnout Engelen
6 years ago

This does pose a bit of a chicken-and-egg problem: you have not solved the root cause of your problem, which is that you didn’t monitor your process’s STDOUT/STDERR.
While your approach is a neat trick to shoehorn more of your logging into your common log format, what if, for example, initializing your logger fails?
At my current customer we send our logging to ElasticSearch via fluentd, which accepts JSON-over-TCP that can be emitted by logback. To also capture the applications' stdout/stderr, we use Docker's own fluentd connector, with some cute configuration to make sure metadata such as the service name and the appropriate loglevel correctly make it into ElasticSearch, even for messages that bypass logback/slf4j.
To receive log messages emitted by logback itself during initialization on STDOUT, you must add a statusListener to your configuration; otherwise that logging can be silently dropped. You can do that with logback's OnConsoleStatusListener.
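
As a minimal sketch of one way to enable such a status listener, assuming logback-classic is on the classpath: logback also honours the logback.statusListenerClass system property, which has the same effect as a statusListener element in logback.xml, provided it is set before the first logger is created.
[code language="scala"]
// Sketch only: the object name is made up; the system property and listener class are logback's own
object EnableLogbackStatus extends App {
  // Must run before anything touches SLF4J/logback, or the early status messages are lost
  System.setProperty(
    "logback.statusListenerClass",
    "ch.qos.logback.core.status.OnConsoleStatusListener")

  // The first logger access triggers logback initialization; its status output now goes to the console
  private val log = org.slf4j.LoggerFactory.getLogger(getClass)
  log.info("logback initialised")
}
[/code]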
