
bitcoin core development – A question on CNode class data members


The question is from 2015; some things have changed since then.

CNetMessage is a transport-protocol-agnostic message container. It holds the received message data (DataStream), the time of message receipt, the payload size, and other information. It is used to deserialize messages received from the network. See the following code:

bool CNode::ReceiveMsgBytes(Span<const uint8_t> msg_bytes, bool& full)
{
    full = false;
    const auto time = GetTime<std::chrono::microseconds>();
    LOCK(cs_vRecv);
    m_last_recv = std::chrono::duration_cast<std::chrono::seconds>(time);
    nRecvBytes += msg_bytes.size();
    while (msg_bytes.size() > 0) {
        // absorb network data
        if (!m_transport->ReceivedBytes(msg_bytes)) {
            // Serious transport problem, disconnect from the peer.
            return false;
        }

        if (m_transport->ReceivedMessageComplete()) {
            // decompose a transport agnostic CNetMessage from the deserializer
            bool reject_message{false};
            CNetMessage msg = m_transport->GetReceivedMessage(time, reject_message);
            if (reject_message) {
                // Message deserialization failed. Drop the message but don't disconnect the peer.
                // store the size of the corrupt message
                mapRecvBytesPerMsgType.at(NET_MESSAGE_TYPE_OTHER) += msg.m_raw_message_size;
                continue;
            }

            // Store received bytes per message type.
            // To prevent a memory DOS, only allow known message types.
            auto i = mapRecvBytesPerMsgType.find(msg.m_type);
            if (i == mapRecvBytesPerMsgType.end()) {
                i = mapRecvBytesPerMsgType.find(NET_MESSAGE_TYPE_OTHER);
            }
            assert(i != mapRecvBytesPerMsgType.end());
            i->second += msg.m_raw_message_size;

            // push the message to the process queue,
            vRecvMsg.push_back(std::move(msg));

            full = true;
        }
    }

    return true;
}
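
For reference, the fields described above map onto members roughly like these (an abridged sketch of the class, not the exact declaration in net.h):

class CNetMessage {
public:
    DataStream m_recv;                   // received message data (payload)
    std::chrono::microseconds m_time{0}; // time of message receipt
    uint32_t m_message_size{0};          // size of the payload
    uint32_t m_raw_message_size{0};      // size of the message as it was on the wire, including headers
    std::string m_type;                  // message type, e.g. "inv" or "tx"
};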

After receiving a message, we push it to vRecvMsg and then put it in a queue to be processed. In ProcessMessage, we get the message data (DataStream) and process it according to the message type.

bool PeerManagerImpl::ProcessMessages(CNode* pfrom, std::atomic<bool>& interruptMsgProc)
{
    AssertLockHeld(g_msgproc_mutex);

    PeerRef peer = GetPeerRef(pfrom->GetId());
    if (peer == nullptr) return false;

    {
        LOCK(peer->m_getdata_requests_mutex);
        if (!peer->m_getdata_requests.empty()) {
            ProcessGetData(*pfrom, *peer, interruptMsgProc);
        }
    }

    const bool processed_orphan = ProcessOrphanTx(*peer);

    if (pfrom->fDisconnect)
        return false;

    if (processed_orphan) return true;

    // this maintains the order of responses
    // and prevents m_getdata_requests from growing unbounded
    {
        LOCK(peer->m_getdata_requests_mutex);
        if (!peer->m_getdata_requests.empty()) return true;
    }

    // Don't bother if send buffer is too full to respond anyway
    if (pfrom->fPauseSend) return false;

    auto poll_result{pfrom->PollMessage()};
    if (!poll_result) {
        // No message to process
        return false;
    }

    CNetMessage& msg{poll_result->first};
    bool fMoreWork = poll_result->second;

    TRACE6(net, inbound_message,
        pfrom->GetId(),
        pfrom->m_addr_name.c_str(),
        pfrom->ConnectionTypeAsString().c_str(),
        msg.m_type.c_str(),
        msg.m_recv.size(),
        msg.m_recv.data()
    );
    );

    if (m_opts.capture_messages) {
        CaptureMessage(pfrom->addr, msg.m_type, MakeUCharSpan(msg.m_recv), /*is_incoming=*/true);
    }

    try {
        ProcessMessage(*pfrom, msg.m_type, msg.m_recv, msg.m_time, interruptMsgProc);
        if (interruptMsgProc) return false;
        {
            LOCK(peer->m_getdata_requests_mutex);
            if (!peer->m_getdata_requests.empty()) fMoreWork = true;
        }
        // Does this peer have an orphan ready to reconsider?
        // (Note: we may have provided a parent for an orphan provided
        //  by another peer that was already processed; in that case,
        //  the extra work may not be noticed, possibly resulting in an
        //  unnecessary 100ms delay)
        if (m_orphanage.HaveTxToReconsider(peer->m_id)) fMoreWork = true;
    } catch (const std::exception& e) {
        LogPrint(BCLog::NET, "%s(%s, %u bytes): Exception '%s' (%s) caught\n", __func__, SanitizeString(msg.m_type), msg.m_message_size, e.what(), typeid(e).name());
    } catch (...) {
        LogPrint(BCLog::NET, "%s(%s, %u bytes): Unknown exception caught\n", __func__, SanitizeString(msg.m_type), msg.m_message_size);
    }
    }

    return fMoreWork;
}
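
PollMessage, used above, pops the next queued CNetMessage, if any, and reports whether more messages are waiting. A minimal sketch of that shape, consuming directly from vRecvMsg for brevity (in the actual code the received messages are first moved into a separate processing queue), could look like this:

std::optional<std::pair<CNetMessage, bool>> CNode::PollMessage()
{
    LOCK(cs_vRecv);
    if (vRecvMsg.empty()) return std::nullopt;

    // pop the oldest message and report whether more work remains
    CNetMessage msg{std::move(vRecvMsg.front())};
    vRecvMsg.erase(vRecvMsg.begin());
    return std::make_pair(std::move(msg), /*more_work=*/!vRecvMsg.empty());
}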

In the case of vSendMsg, it is a vector of CSerializedNetMsg. The CSerializedNetMsg structure is simple: it holds the message data (a vector of unsigned char) and its type. As its name indicates, it represents the serialized message. You can see in the codebase that NetMsg::Make is usually used to construct it. This function accepts a string parameter representing the message type, and every other parameter is used to compose the message data.

namespace NetMsg {
    template <typename... Args>
    CSerializedNetMsg Make(std::string msg_type, Args&&... args)
    {
        CSerializedNetMsg msg;
        msg.m_type = std::move(msg_type);
        VectorWriter{msg.data, 0, std::forward<Args>(args)...};
        return msg;
    }
} // namespace NetMsg
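
As a rough illustration of how it is typically called (node_to here stands for some CNode& we want to send to; the exact call sites in net_processing vary), a ping message can be composed and queued like this:

const uint64_t nonce{GetRand<uint64_t>()};                               // random ping nonce
m_connman.PushMessage(&node_to, NetMsg::Make(NetMsgType::PING, nonce));  // serialize "ping" and queue it for sending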

Now, what I understand about the difference between CSerializedNetMsg and CNetMessage is:

  1. CSerializedNetMsg seems lighter.
  2. CNetMessage has more members (m_time, m_message_size, m_raw_message_size).
  3. Only one CNetMessage object for the same message will exist. CSerializedNetMsg has a specific method for making copies (see the sketch below).
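
The "specific method for making copies" in item 3 refers to CSerializedNetMsg being move-only, with an explicit Copy(). Roughly (a simplified sketch of the struct):

struct CSerializedNetMsg {
    CSerializedNetMsg() = default;
    CSerializedNetMsg(CSerializedNetMsg&&) = default;
    CSerializedNetMsg& operator=(CSerializedNetMsg&&) = default;
    // No implicit copying; a copy must be requested explicitly.
    CSerializedNetMsg(const CSerializedNetMsg&) = delete;
    CSerializedNetMsg& operator=(const CSerializedNetMsg&) = delete;

    CSerializedNetMsg Copy() const
    {
        CSerializedNetMsg copy;
        copy.data = data;
        copy.m_type = m_type;
        return copy;
    }

    std::vector<unsigned char> data;
    std::string m_type;
};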

Although they look similar, they have specific characteristics suited to their purposes. For example, it is important to handle received messages with CNetMessage because, among other reasons, we need to know exactly the size of the data we receive in order to track the process queue size. Copying a CSerializedNetMsg can be useful when sending the same message to more than one node, as in the following example:

    m_connman.ForEachNode([this, pindex, &lazy_ser, &hashBlock](CNode* pnode) EXCLUSIVE_LOCKS_REQUIRED(::cs_main) {
        AssertLockHeld(::cs_main);

        if (pnode->GetCommonVersion() < INVALID_CB_NO_BAN_VERSION || pnode->fDisconnect)
            return;
        ProcessBlockAvailability(pnode->GetId());
        CNodeState &state = *State(pnode->GetId());
        // If the peer has, or we announced to them the previous block already,
        // but we don't think they have this one, go ahead and announce it
        if (state.m_requested_hb_cmpctblocks && !PeerHasHeader(&state, pindex) && PeerHasHeader(&state, pindex->pprev)) {

            LogPrint(BCLog::NET, "%s sending header-and-ids %s to peer=%d\n", "PeerManager::NewPoWValidBlock",
                    hashBlock.ToString(), pnode->GetId());

            const CSerializedNetMsg& ser_cmpctblock{lazy_ser.get()};
            PushMessage(*pnode, ser_cmpctblock.Copy());
            state.pindexBestHeaderSent = pindex;
        }
    });

About vRecvGetData: it was removed in #19911. Now we have m_getdata_requests as a member of Peer; when a node receives a GETDATA message, it stores the INVs in m_getdata_requests. It is used to keep track of what a node has requested from us (e.g., a transaction).
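
Roughly, the GETDATA branch of ProcessMessage does something like this (abridged sketch; the inventory-size and misbehavior checks are omitted, and pfrom is the CNode& passed to ProcessMessage):

if (msg_type == NetMsgType::GETDATA) {
    std::vector<CInv> vInv;
    vRecv >> vInv;  // deserialize the requested inventory items

    LOCK(peer->m_getdata_requests_mutex);
    // remember what the peer asked us for, then try to serve the requests
    peer->m_getdata_requests.insert(peer->m_getdata_requests.end(), vInv.begin(), vInv.end());
    ProcessGetData(pfrom, *peer, interruptMsgProc);
    return;
}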
