I’ve been developing in Java since late 2009. It’s a good language, but I’m starting to wonder what kind of future it has. I’ve been using Java 8 since shortly after it came out, even though Java is currently on version 11. Java’s obviously still being developed, so why not move forward? A big part of it is the infamous module system that launched in Java 9 and broke a lot of stuff. Part of it is the fact that Java is owned and controlled by a company that seems more interested in rent-seeking off of Java than doing anything innovative with it.
Java 8 introduced a lot of cool features, 1 of the most useful of which was the
stream() method. This nifty little method lets you treat a Collection as a stream, enabling cool things like lambdas operating over a list. Related to stream() is
parallelStream(). This splits your stream into smaller chunks that are processed in, you guessed it, parallel. Specifically, your data is processed in a shared thread pool (the common ForkJoinPool) sized to the number of cores on your machine, minus the one running your app. That’s a handy piece of information you’re going to want to keep in mind before you start throwing this nifty little call around in your code.
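Here’s a quick sketch of the 2 calls side by side (the class and the numbers here are just for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class StreamExample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // stream(): runs the pipeline sequentially on the calling thread
        int sum = numbers.stream()
                .mapToInt(Integer::intValue)
                .sum();

        // parallelStream(): same pipeline, but the work is split across
        // the common ForkJoinPool (cores minus the calling thread by default)
        int parallelSum = numbers.parallelStream()
                .mapToInt(Integer::intValue)
                .sum();

        System.out.println(sum + " == " + parallelSum);
    }
}
```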
Recently, we had a Kinesis consumer back up due to a deployment problem. Sadly, this problem went on for 2 days without us noticing, so we were pretty backed up. Now in theory, we should have been able to catch back up to real-time data sometime later that calendar day. In reality, we fell 2 days behind and it took us 2 days to catch up. That’s not acceptable, so I wanted to document the things we tried, how well they worked, and our process for hunting down the bottleneck. Obviously, there’s tons of room for improvement in the code, but there’s also a lot of room for improvement in how we were doing things before, and probably a lot of room for improvement in how we went about trying to fix the issue.
I transferred teams at work recently, and spent about a week trying to get their code running on my laptop so I could do useful development work. This is in addition to trying to wrap my head around the existing codebase and figuring out how to test my changes. It’s not that the code is bad, it’s just that getting all the ****ing components hooked up, communicating with each other, and playing nicely together is an exercise in impossibility. Coming from a group that ran everything in AWS, going back to managing all the third-party services in a development environment makes me want to flip my desk over and start screaming about what the **** is wrong with everyone.
My last post about mocking a Netty server using Mockito worked for Netty 3.x, but the changes made in Netty 4.0 broke a lot of that work. After spending some quality time reading up on the changes from 3 to 4 and debugging my testing code, I got my mocked Netty server working with Netty 4.0, and now I’m posting it here in the hopes it helps anyone else who’s looking to mock a current Netty server for their unit tests.
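The full post has the details, but the basic shape of the Netty 4.0 approach looks something like this sketch (EchoHandler here is just a made-up stand-in, not the handler from my real code):

```java
import static org.mockito.Mockito.*;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import org.junit.Test;

public class EchoHandlerTest {

    // A trivial stand-in handler: echoes whatever it reads back out
    static class EchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.writeAndFlush(msg);
        }
    }

    @Test
    public void echoesWhatItReads() throws Exception {
        // Mock the Netty 4 context instead of standing up a real server
        ChannelHandlerContext ctx = mock(ChannelHandlerContext.class);
        ByteBuf msg = Unpooled.copiedBuffer("ping", CharsetUtil.UTF_8);

        new EchoHandler().channelRead(ctx, msg);

        // The handler should have written the message straight back out
        verify(ctx).writeAndFlush(msg);
    }
}
```

Note the Netty 4 names: channelRead() and writeAndFlush() replace the 3.x messageReceived()/MessageEvent plumbing, which is a big part of what broke the old tests.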
A while back at work, we noticed a periodic issue where remote jobs just weren’t being run. That was caused by the fact that the jobs weren’t actually being sent to the remote AWS instances that were meant to be running them. Just why the jobs weren’t being sent out was a mystery, but a simple bounce of either the remote servers or the main application itself seemed to get things moving again. In the meantime, we were off on a hunt for why these jobs were no longer getting dispatched.
EDIT – This post was written for mocking a Netty 3.x server. For mocking a Netty 4.0 server, see this post.
While working on an app at my current job, I wound up touching some code that didn’t have any unit tests associated with it. Since we’re a small team (but growing), any automation in testing really helps (not to mention just being a good thing to do). The issue was that the code was all in a request handler for a Netty server, which meant I needed a way of either running a Netty server during the Maven build process, or simulating 1 via some type of mocking library. Ultimately, I settled on the latter. Here’s how I did it, and the things I learned along the way, with a rough sketch of the idea below.
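To give you a rough idea before diving in, here’s a minimal sketch of what mocking the Netty 3.x plumbing looks like (RequestHandler is a made-up stand-in for the real handler):

```java
import static org.mockito.Mockito.*;

import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.junit.Test;

public class RequestHandlerTest {

    // A trivial stand-in handler: writes a canned reply to the channel
    static class RequestHandler extends SimpleChannelHandler {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
            e.getChannel().write("pong");
        }
    }

    @Test
    public void respondsWithoutARealServer() throws Exception {
        // Mock the Netty 3.x objects the handler would normally get
        // from a running server
        ChannelHandlerContext ctx = mock(ChannelHandlerContext.class);
        MessageEvent event = mock(MessageEvent.class);
        Channel channel = mock(Channel.class);

        when(event.getMessage()).thenReturn("ping");
        when(event.getChannel()).thenReturn(channel);

        new RequestHandler().messageReceived(ctx, event);

        // Verify the handler wrote a response back to the channel
        verify(channel).write("pong");
    }
}
```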
1 of the last projects I worked on at my previous job involved aggregating, storing, and querying log data into and from Elasticsearch (yes, I know that Logstash does that – and in reality I should have gone that route). That, along with some lookups on the data outside of the code, gave me a chance to start playing with Elasticsearch. After my brief experience with it, I can tell you there’s a lot of power in Elasticsearch, but it’s going to take you a lot longer to figure out how to tap it than you would expect.
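For a taste of what that looks like from Java, here’s a minimal sketch using the TransportClient-era Client API (the “logs” index and “message” field are made up for illustration):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;

public class LogSearch {

    // Hypothetical example: pull log entries mentioning "timeout" out of
    // a "logs" index (the index and field names are invented)
    public static void printMatches(Client client) {
        SearchResponse response = client.prepareSearch("logs")
                .setQuery(QueryBuilders.matchQuery("message", "timeout"))
                .setSize(20)
                .execute()
                .actionGet();

        for (SearchHit hit : response.getHits()) {
            System.out.println(hit.getSourceAsString());
        }
    }
}
```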
Read any technical blog post that gives a deep dive into fixing any type of issue, and 1 thing you notice fairly quickly is that going through the logs is an important part of the process. Debug issues in any application you’re working on, and 1 thing you notice fairly quickly is whether or not your logs are any good. It’s a distinction that can make all the difference when the question of “What the deuce just happened?” rears its ugly head. Better logging can make your life easier, largely by telling you all about the state of what’s going on in your code so you can spend your time actually fixing and updating things instead of running down just what is going on in the first place.
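To put “telling you all about the state of what’s going on” in concrete terms, here’s a small sketch of the difference using SLF4J (the class and names are invented):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderProcessor {

    private static final Logger log = LoggerFactory.getLogger(OrderProcessor.class);

    // Hypothetical example: the log lines carry the state you'll need later
    public void process(String orderId, int itemCount) {
        // Unhelpful: log.info("Processing order");
        // Helpful: includes the identifiers you'll be grepping for at 2 AM
        log.info("Processing order {} with {} items", orderId, itemCount);

        try {
            // ... actual work would go here ...
        } catch (RuntimeException e) {
            // Keep the context AND the stack trace together in 1 line
            log.error("Failed to process order {}", orderId, e);
        }
    }
}
```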