Packet Delay Trace


Viewing 4 posts - 1 through 4 (of 4 total)

    Hi Team,

    Is it possible with this simulator to trace the delay experienced by each IP packet that the UE sends or receives? I know that the radio network does not deal with data as plain IP packets, but I wanted to know whether such a feature is supported.

    I am trying to emulate the delays that a packet sent or received by the UE experiences due to varying radio conditions.

    If it is not possible, do you know of a good way to feed the per-TTI link-rate variation from this simulator into a Linux machine?



    In our link-level simulator, the transmission of data symbols over a wireless channel is implemented in great detail, on a per-sample basis. In each subframe (TTI), a certain number of random bits is passed through physical-layer processing to obtain a complex-valued time-domain signal, which is transmitted over a channel. At the receiver side, the received signal is processed to recover the bits. As part of this physical-layer processing, a CRC check is performed; at the receiver, this check indicates whether a packet was received correctly. If HARQ is used (downlink only), retransmissions may occur, leading to an actual packet delay.

    In the simulation results you will find how many bits were transmitted per TTI and whether decoding failed. For a given packet size, you can use this information to extract how long it takes, on average, to transmit a packet over a certain channel model at a certain SNR.
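    As a sketch of that post-processing step, one could walk the per-TTI trace and accumulate correctly received bits until a packet-sized chunk is complete. The trace format below (lists of per-TTI bit counts and CRC outcomes) is an assumption for illustration, not the simulator's actual result structure:

    ```python
    # Hypothetical sketch: estimate the average delivery time (in TTIs) of a
    # packet of a given size from a per-TTI trace. The trace format
    # (bits_per_tti, crc_ok) is an assumption, not the simulator's output.

    def avg_packet_delay_ttis(bits_per_tti, crc_ok, packet_bits):
        """Accumulate correctly received bits TTI by TTI; each time
        packet_bits have arrived, record how many TTIs that packet took."""
        delays, acc, start = [], 0, 0
        for tti, (bits, ok) in enumerate(zip(bits_per_tti, crc_ok)):
            if ok:  # CRC passed, bits of this TTI count as delivered
                acc += bits
            while acc >= packet_bits:  # one (or more) packets completed
                acc -= packet_bits
                delays.append(tti - start + 1)
                start = tti + 1
        return sum(delays) / len(delays) if delays else float("inf")

    # Example: 1000-bit packets over a trace where the second TTI fails.
    print(avg_packet_delay_ttis([600, 600, 600, 600],
                                [True, False, True, True], 1000))  # → 3.0
    ```

    The failed TTI contributes no bits, so the first packet needs three TTIs instead of two, which is exactly the retransmission-induced delay described above.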

    For the downlink, some traffic models are also implemented; however, they are not very mature.

    Best regards


    Dear Stephan,

    Thank you for your reply. I understand that the LTE simulator reports how many bits were transmitted in each TTI for a particular UE. Does this number of bits represent only the bits taken from a user's IP packet, or the total bits transferred from one physical-layer PDU? Since a physical-layer PDU may contain more than one IP packet: if X bits were transferred in a TTI, does that mean X bits from the IP packet or X bits from the physical-layer PDU?

    Also, a UE may not be served by the simulator in a given TTI because of load (scheduling delay). In that case, what would the output be for the number of bits transferred in those TTIs?

    Is it possible to tweak the scheduler algorithm to implement prioritized scheduling of some kind so as to improve packet latency?



    In general, the transmitted bits are generated randomly under a full-buffer assumption. This means that, if no traffic model is used, there is assumed to always be data to transmit. The number of bits corresponds to the currently employed modulation and coding scheme and also depends on the number of active spatial streams (number of transmitted codewords). So the number of transmitted bits you find in the results is simply the physical-layer throughput per TTI and SNR. If no traffic model is employed, which is the default in our link-level simulator, there is no such thing as a higher-layer packet or a loaded network condition.
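    For example, interpreting the per-TTI bit counts as physical-layer throughput is a one-line conversion (the trace format is again a hypothetical; an LTE subframe lasts 1 ms):

    ```python
    # Sketch: convert a per-TTI transmitted-bits trace (hypothetical format)
    # into average physical-layer throughput. One LTE TTI (subframe) = 1 ms.
    TTI_SECONDS = 1e-3

    def phy_throughput_mbps(bits_per_tti):
        """Average physical-layer throughput in Mbit/s over the trace."""
        return sum(bits_per_tti) / (len(bits_per_tti) * TTI_SECONDS) / 1e6

    # Full-buffer trace: every TTI carries a transport block sized by the MCS.
    print(phy_throughput_mbps([15000, 15000, 12000, 15000]))  # → 14.25
    ```

    Under full buffer this number reflects only the channel, MCS, and number of codewords, with no notion of higher-layer packets.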

    In the Vienna LTE-A Link Level Downlink Simulator, some traffic models are implemented that generate traffic according to a given service. In this case, the full-buffer assumption no longer applies, because there is not always data to transmit. However, as I said, these traffic models are not very mature. As you suggested, you would have to modify and improve them for your needs. This, of course, means changes to the traffic model and probably also to the scheduler.
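    As a toy illustration of the prioritized scheduling asked about above (this is not the simulator's scheduler; all names and the queue format are hypothetical), a strict-priority rule per TTI could look like this:

    ```python
    # Minimal strict-priority scheduler sketch: in each TTI, the resource
    # goes to the backlogged UE with the highest priority (lowest number).
    # Queue format {ue_id: (priority, backlog_bits)} is hypothetical.

    def schedule_tti(queues):
        """Return the UE to serve in this TTI, or None if all buffers are
        empty (i.e. no full-buffer assumption)."""
        backlogged = [(prio, ue) for ue, (prio, bits) in queues.items()
                      if bits > 0]
        if not backlogged:
            return None  # nothing to transmit this TTI
        return min(backlogged)[1]  # lowest priority number wins

    print(schedule_tti({"ue1": (2, 500), "ue2": (1, 300), "ue3": (1, 0)}))
    # → ue2  (ue3 has higher priority but an empty buffer)
    ```

    A real modification would also have to respect HARQ retransmissions and the resource-grid granularity, but the buffer-aware selection step is the part that a traffic model introduces.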

