I wanted to share a quick piece on how to publish and subscribe to JSON data with Apache Kafka using simple command line utilities. Here at Eventador, we use this set of simple command line tools quite frequently. The command line is very useful not only for testing and development, but also for sampling your stream to understand the data inside it.
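As a rough sketch of what this looks like (the broker address `localhost:9092`, the topic name `readings`, and the JSON payload are placeholders, and the exact script names and flags vary by Kafka version):

```shell
# Produce a single JSON message to the "readings" topic.
# (Older Kafka releases use --broker-list instead of --bootstrap-server
# for the console producer.)
echo '{"sensor": "temp-1", "value": 72.4}' | \
  kafka-console-producer.sh --bootstrap-server localhost:9092 --topic readings

# Tail the topic from the beginning to sample the JSON flowing through it.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic readings --from-beginning
```

Piping `echo` into the console producer is handy for one-off test messages; the console consumer then lets you eyeball the raw JSON without writing any client code.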
I had a great time speaking with Brett Piatt on CyberTalk Radio. We chatted about data security but also talked a bit about fundamentals and the history behind how we arrived at this place in technology. We also talked a bit about the Kafka as a Service we built at Eventador.io.
This has been the least blogging I have done since 2013. That year I was crazy busy building ObjectRocket; similarly, this year I have been crazy busy building Eventador.io (as we now call it). I thought I would post a retrospective of some of the year's highlights and things that come to mind in a few paragraphs to end the year on a high note.
Over the last few releases, PostgreSQL has developed awesome JSON functionality inside the database. That said, every once in a while you want to simply display that JSON in psql for easy viewing, working out a query, copying it to your buffer, etc. In PostgreSQL 9.5, jsonb_pretty was added to solve this need.
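For instance (assuming a local database you can connect to; the JSON literal here is just an illustration), pretty-printing a jsonb value from the shell might look like:

```shell
# Pretty-print a jsonb literal with jsonb_pretty (PostgreSQL 9.5+).
# Connection details are assumed; adjust -h/-U/-d for your environment.
psql -c "SELECT jsonb_pretty('{\"name\": \"example\", \"tags\": [\"psql\", \"jsonb\"]}'::jsonb);"
```

The same function works on any jsonb column in a real query, which makes it easy to read nested documents while you iterate on the SQL.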
If I have learned nothing else in the last 5 years, it's to play where you are at your best.
Know what you are good at, and do that. Resist playing in areas where you don't add
a lot of value; maximize your time and efforts around the things you do well. In essence,
double down on your talents, and leverage your passions.
I have attended lots of PostgreSQL meetups and conferences, but never spoken at one. I always wanted to, but never had the time or the opportunity. So I am very excited to be speaking at PGConf SV on November 18th 2015.
For the last six months or so I have been exclusively using Atom as my editor. If you aren’t familiar, Atom is the editor created by Github. It’s simple, easily hackable, and generally awesome. It also doesn’t seem to be as slow as it used to be.
I am very excited to be speaking at Percona Live in Amsterdam this year. In the last couple of years I have been attending and speaking at more and more Percona Live events.
I love data-driven projects. I am also a WWII nerd. This visualization brings some grim realities front and center, but it also gives a perspective often lost. My takeaway is the absolutely staggering human cost of pushing back Japan and ridding the world of the Nazis. This visualization just blows my mind.
I had the privilege to be on the Partially Derivative podcast last week. I was turned on to these guys a month or two ago, and have been an addict ever since. A bunch of data nerds here in the ObjectRocket offices have also become addicts. I suspect it has something to do with the beautiful marriage between data and beer!
On March 24th 2015 I presented to the Austin MongoDB user group. I had a blast, but I think what really made it so fun is that I am very excited about MongoDB 3.0 and WiredTiger. It was awesome to share some of the testing, research, and benchmarks I have done.
In the last few months I have spent a lot of time load testing MongoDB for various reasons, from testing compression ratios in TokuMX and hardware platforms for ObjectRocket/Rackspace to the various storage engines in the new MongoDB 3.0 release. My new role has me forming lots of opinions about various things, and I like to be data- and fact-driven about them.