LTE-M & NB-IoT LTE Modem Data Rate

Howdy,

Does either the Python SDK or Hologram CLI allow for real-time data-rate estimation? If not, is there some way to quantitatively correlate signal strength with data rate (even if it’s a rough estimate)?

For Cat-M1, and likely also NB-IoT, there is not really a correlation between data rate and signal strength unless you are at very low signal strengths (like a 1-bar equivalent). Note that when signal strength is poor it is also quite variable, so you might see -110 dBm one second and -130 dBm the next; even if this did correlate with speed at the low end, the measurement would be noisy and the estimate unreliable. Also, RSRQ is likely a better indicator than signal strength, and again only relevant when both are low.
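If you want to see how jumpy that measurement is, a quick sketch along these lines will log the raw RSSI once a second. It assumes your modem exposes a serial AT command port and supports the standard AT+CSQ command; the device path and baud rate are guesses for your setup, and you need pyserial installed.

```python
import re
import time

import serial  # pyserial

PORT = "/dev/ttyUSB2"  # hypothetical AT command port; check your modem's docs

def read_rssi_dbm(ser):
    """Send AT+CSQ and return RSSI in dBm, or None if the modem reports unknown."""
    ser.write(b"AT+CSQ\r")
    reply = ser.read(64).decode(errors="ignore")
    match = re.search(r"\+CSQ: (\d+),", reply)
    if not match:
        return None
    index = int(match.group(1))          # 0..31, 99 = unknown
    if index == 99:
        return None
    return -113 + 2 * index              # standard CSQ index-to-dBm mapping

with serial.Serial(PORT, 115200, timeout=1) as ser:
    for _ in range(10):
        print(read_rssi_dbm(ser), "dBm")
        time.sleep(1)
```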

Data rate is mostly limited by the underlying technology (roughly 375 kb/s to 1 Mb/s for Cat-M1, ~50 kb/s for NB-IoT). I would also suspect other issues such as latency (which can be quite high on IoT-oriented tech) and its effects on TCP or on your overall speed estimate if the payload is small.
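To make the latency point concrete, here is a back-of-the-envelope calculation (the numbers are illustrative, not measured): if every send pays a fixed per-message delay, small payloads end up with a much lower effective rate than the link itself supports.

```python
def effective_kbps(payload_bytes, latency_s, link_kbps):
    """Effective throughput when each message pays a fixed latency cost."""
    payload_kbits = payload_bytes * 8 / 1000
    transfer_s = payload_kbits / link_kbps
    return payload_kbits / (latency_s + transfer_s)

# Illustrative numbers: 2 s per-message latency on a ~300 kb/s link.
print(effective_kbps(1500, 2.0, 300))    # ~6 kb/s effective for 1.5 kB payloads
print(effective_kbps(10000, 2.0, 300))   # ~35 kb/s effective for 10 kB payloads
```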

In summary, I don't think Cat-M1 or NB-IoT are good technology choices if reliable bandwidth is needed. They are a better fit for periodic "heartbeat"-style messages, or for non-real-time applications where it doesn't really matter whether a message takes 4 s or 0.4 s to transmit.

If you need performance closer to WiFi / mobile phones, then look at LTE Cat-1, Cat-2, Cat-4, etc. modems.

Thank you! This helped a lot. Do you happen to know why there is that latency in the system and whether it can be decreased somehow? We are attempting to use the modem in a rather unconventional manner: live-streaming video data. The data rate is fine for what we are doing, but the latency is bogging down the stream. No matter how much data I send in each packet, the modem always takes about 2 seconds to send it and takes full control of the processor while doing so. If that could be shaved down to even a more consistent ~0.5 s, we could make do.

Video over Cat-M may not be ideal no matter what, but how are you sending the packets? Are you using ppp and then writing to a socket?

I'm currently just using the standard sendMessage function and sending the compressed binary stream in packets of about 1.5 kB. In terms of data rate, the stream should be more than okay, because I can get decently good quality 320x240, 20 fps video compressed to 100 kbps, but with the roughly 2-second latency and a max packet size of ~10 kB, the true data rate is limited to about 40 kbps, which is a tough margin to hit even for very poor video. How does the ppp-and-socket scheme work, and how does it differ from the standard sendMessage?

Oh so you’re actually sending the messages into our cloud messaging service. Are you then routing it back to you with a webhook?
This is not designed for real-time applications and will probably always have a second or two of latency. What's going on when you call sendMessage is that we start up a new data session on the modem, open a socket to our cloud, write the message, and disconnect.
On our cloud those messages end up in a queue and then get processed and dispatched to the correct routes.
If on the other hand you open a ppp connection so that you can write to a socket directly, then you can send the data straight to your own server with nothing in the way, and you don't have to start up and tear down the connection each time.
When you run sudo hologram network connect, this is what it's doing: it just creates a network interface as if you were on WiFi.
This would probably be a better fit for video, so you might want to give it a shot. Search these forums for ppp for more information, and read up on socket programming.
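Once the ppp session is up, the sending side can be as simple as a plain socket. A minimal sketch, assuming UDP and a server of your own (the address and the frame source are placeholders for your setup):

```python
import socket

SERVER = ("your.server.example.com", 5000)  # hypothetical endpoint

def stream_chunks(chunks):
    """Send each encoded video chunk as one UDP datagram over the ppp interface."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for chunk in chunks:        # chunk: bytes, e.g. ~1.5 kB of encoded video
            sock.sendto(chunk, SERVER)
    finally:
        sock.close()
```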

Oh awesome, thank you! I will definitely look into that and the operation of it makes a little more sense now. Also, yeah, a webhook afterward was the intention.

I currently have a stable version of the code just streaming over a UDP socket and displaying with Gstreamer on the backend. How much different will the ppp/socket scheme be (other than the fact that it will be TCP)?

UDP works fine too (though you might see more dropped packets). It's just like any other network interface, so if you already have network streaming code it would be basically the same.

Though you probably aren’t going to see great data rates with M1 in general. You’ll have to see how it goes.

@rankner7

The real solution if you are struggling with latency is multithreading. If you are using Python, you are on an operating system that can take care of context switching for you. Just create a separate thread and a thread-safe queue (Python's standard queue module works for this). One thread runs your video, the other sends data.

This is especially useful as the data sending is mostly just waiting which leaves the processor free for the video task.

Then all you have to worry about is average speed (which, again, includes latency; it's just that the latency no longer locks up your Python code).
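A rough sketch of that layout is below; send_chunk is a placeholder for whatever transport you end up using (sendMessage, or a socket over ppp), and the queue size is an arbitrary choice.

```python
import queue
import threading

send_queue = queue.Queue(maxsize=100)   # bounded so a dead link can't eat all your RAM

def send_chunk(chunk):
    """Placeholder: replace with sendMessage() or a socket write over ppp."""
    pass

def sender_worker():
    """Drain the queue and do the slow, mostly-waiting network sends."""
    while True:
        chunk = send_queue.get()
        if chunk is None:               # sentinel: shut down the worker
            break
        send_chunk(chunk)               # only this thread blocks on the send
        send_queue.task_done()

threading.Thread(target=sender_worker, daemon=True).start()

# In the video loop, hand off each encoded chunk and keep going:
#   send_queue.put(chunk)               # blocks only if the queue is full
```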

EDIT: I should say "The real solution if you are struggling with latency blocking your CPU is multithreading". Multithreading doesn't fix latency, but it can nearly eliminate one of the big costs.