Understanding Data Usage


I’m trying to understand and calculate the amount of data I can potentially use within my boundaries.

Hologram.send("Hello World with slackapi");

The string above is 25 bytes, but it registers as 586 bytes total on the usage meter.
Safe to say that if we subtract the 25-byte payload from the total bytes posted to usage, we get 561 bytes of default overhead that will apply to every message sent.

After I finish my calculation, the Hologram bill will end up being more than my actual phone bill with unlimited data!

My question: is there a way to lower the number of bytes?

Question #2. On the https://dashboard.hologram.io/devices/logs page, there is a byte column that shows the payload size (in this case, “25” for the message above), but why not show the WHOLE size of the message that will affect the actual bill? What is the point of this column?

Ex: under Usage I’m at 4096 B used, but when I add up all the bytes on the logs page it only comes to 123 bytes.

I’m just trying to understand the system, please let me know if I’m doing something wrong on my end.
(BTW, running the kitchsink GitHub code)


Hi there!

What you’re seeing are the TCP/IP headers involved in sending a packet of data over the Internet.

The number of bytes listed next to each message is (confusingly, we admit) not representative of the overhead bytes it takes to transmit data packets over a network; it only shows the size of your data payload. You’re correct that this isn’t particularly useful information - we’ll soon be phasing out that column.

Ideally, we would provide the total number of bytes, including TCP/IP overhead, that it takes to transmit each data message (we’re working on it!). However, at this time, the overall data usage for a SIM is calculated independently from the messages as they show up in the Hologram cloud.

Here’s another post that goes into a little more detail about network overhead, and some differences between TCP and UDP transport protocols: SerialCloud.print data usage
Long story short, you can generally expect somewhere around 300-500 bytes of default TCP/IP overhead with each message.

Depending on your application and data needs, as well as hardware/software capability - there are a few strategies that can be used to try and reduce data overhead. The main recommendation for TCP (assuming your application permits) is to send fewer messages overall, and aggregate the data from several messages into a single send.

Obviously this strategy won’t always work for everyone’s application. We’d love to hear what kind of project you’re working on, hardware/libraries you’re using, how much data you expect to use - hopefully we can help get your costs down!


I have found these guidelines to be both helpful and accurate (the guidelines mentioned in the post @phogan refers to above). @TonyM and I have actually let a couple of different sketches run over a “flying month.” We were then able to directly compare how many bytes were used and billed versus how many we were sending as payload with each message. The overhead is directly tied to the number of messages sent, so cramming as much data as possible into a single message is definitely the ticket to data happiness. The story has a happy outcome for our use; Hologram still comes out amazingly affordable.