I’m trying to understand the actual cost of data sent. With my Dash now up and running, I’ve been testing it for a few days at a time, and a couple of questions have come up.
I’ve noticed that when I send 5 bytes of data (literally a 5-byte ping: “Ping!”), the dashboard logs it as 8 bytes. I assume there’s some overhead involved, maybe a stop bit or two, a field descriptor, etc. Either way, it would be good to know the calculation behind it. The same thing happens when I send a 65-byte string: it’s logged as 88 bytes.
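For what it’s worth, one pattern I noticed is that both logged sizes happen to match what a base64-encoded payload would come out to (4 * ceil(n / 3) bytes). That’s pure guesswork on my part about what’s actually going on; this is just the quick check I ran:

```python
import math

def base64_length(n_bytes):
    """Length of n_bytes of data after base64 encoding (with padding)."""
    return 4 * math.ceil(n_bytes / 3)

print(base64_length(5))   # 8  -> matches my 5-byte "Ping!" logged as 8 bytes
print(base64_length(65))  # 88 -> matches my 65-byte string logged as 88 bytes
```

It could easily be a coincidence, so I’d still love to see the actual calculation.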
How does my transmitted payload (the useful part of the data) translate into what is actually logged as the total data I get charged for? As with the question above, it seems there’s some overhead necessarily involved on a per-message basis, which is completely understandable. I would just like to know what that overhead is and how it breaks down, so I can put together a good estimate of volume field-deployment costs and of the data charges I’m likely to see based on both message length and frequency of transmission.
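To make that concrete, here’s the kind of back-of-the-envelope estimate I’m trying to build. The overhead_bytes value is a placeholder assumption (it’s exactly the number I’m asking about), and a flat per-message overhead may not even be the right model given the two data points above:

```python
def monthly_usage_bytes(payload_bytes, messages_per_day, overhead_bytes, days=30):
    """Rough billable bytes per device per month, assuming a flat
    per-message overhead on top of each payload."""
    return (payload_bytes + overhead_bytes) * messages_per_day * days

# Example: a 65-byte message sent once an hour, guessing 23 bytes of overhead
# (the difference I saw between 65 bytes sent and 88 bytes logged)
print(monthly_usage_bytes(65, 24, 23))  # 63360 bytes per device per month
```

Once I know the real overhead rule, I can plug it in here and scale it across a fleet.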
Are any data charges incurred, say over a 24-hour period, if the board is alive but not transmitting anything?
I know you’re working on bringing better resolution, timing, etc. to the dashboard usage view, but understanding what goes into the calculation is what I’m really after at this point.
Thanks as always!