feat: Added Transaction size calculation methods #1795
manishdait wants to merge 26 commits into hiero-ledger:main
Conversation
Codecov Report

❌ Patch coverage is

```
@@            Coverage Diff             @@
##             main    #1795      +/-   ##
==========================================
+ Coverage   93.68%   93.72%    +0.04%
==========================================
  Files         145      145
  Lines        9426     9456       +30
==========================================
+ Hits         8831     8863       +32
+ Misses        595      593        -2
```
Force-pushed from 0da2438 to 1825462
Hi, this is MergeConflictBot. Please resolve these conflicts locally and push the changes.

Quick fix for CHANGELOG.md conflicts: if your conflict is only in CHANGELOG.md, you can resolve it easily using the GitHub web editor.

For all other merge conflicts, please read: Thank you for contributing!
Hi @manishdait, This pull request has had no commit activity for 10 days. Are you still working on it?

If you're no longer working on this, please comment. Reach out on Discord or join our office hours if you need assistance. From the Python SDK Team
Hi @manishdait, this is InactivityBot 👋 This pull request has had no new commits for 21 days, so I'm closing it and unassigning you from the linked issue to keep the backlog healthy. If you're no longer interested, no action is needed. If you'd like to continue working on this later, feel free to comment.
Force-pushed from 1825462 to c6b94fe
Force-pushed from fbac757 to 1489840
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Adds transaction size calculation properties.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/hiero_sdk_python/transaction/transaction.py (1)
419-432: ⚠️ Potential issue | 🟠 Major

Return type of `build_transaction_body()` must be `transaction_pb2.TransactionBody`.

Line 419 declares the return type as `transaction_pb2.Transaction`, but the docstring and all subclass implementations use `transaction_pb2.TransactionBody`. This creates a type contract violation: every subclass override (transfer_transaction.py, batch_transaction.py, etc.) declares `TransactionBody`, making them incompatible with the base signature.

Suggested fix

```diff
- def build_transaction_body(self) -> transaction_pb2.Transaction:
+ def build_transaction_body(self) -> transaction_pb2.TransactionBody:
```

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
273-287: ⚠️ Potential issue | 🔴 Critical

BLOCKER: carry `seconds` when chunk `valid_start.nanos` overflows.

Lines 280-283 add `i` directly to `base_timestamp.nanos` without checking for overflow. Protobuf `Timestamp.nanos` must be in the range `[0, 999_999_999]`. With the default `max_chunks=20`, if a transaction is created with a `valid_start` near the end of a second (e.g., `nanos=999_999_980`), chunk indices approaching 20 will cause the nanos value to exceed the valid range, resulting in invalid or rejected transaction IDs on the network.

Suggested fix

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```
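The overflow-carry arithmetic in the suggested fix can be checked in isolation. The following standalone sketch is illustrative (the function name and sample values are not SDK code):

```python
NANOS_PER_SECOND = 1_000_000_000

def chunk_valid_start(base_seconds: int, base_nanos: int, chunk_index: int) -> tuple:
    """Offset a base timestamp by `chunk_index` nanoseconds, carrying any
    overflow of the nanos field into seconds so nanos stays in [0, 999_999_999]."""
    total_nanos = base_nanos + chunk_index
    return (
        base_seconds + total_nanos // NANOS_PER_SECOND,
        total_nanos % NANOS_PER_SECOND,
    )

# A valid_start near the end of a second: without the carry, chunk index 25
# would produce nanos=1_000_000_005, outside the protobuf Timestamp range.
print(chunk_valid_start(100, 999_999_980, 19))  # (100, 999999999) — no carry needed
print(chunk_valid_start(100, 999_999_980, 25))  # (101, 5) — overflow carried into seconds
```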
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9f37e713-f101-4e75-99e8-ac95e67c3a50
📒 Files selected for processing (8)
- CHANGELOG.md
- src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
- src/hiero_sdk_python/file/file_append_transaction.py
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/integration/topic_message_submit_transaction_e2e_test.py
- tests/unit/topic_message_submit_transaction_test.py
- tests/unit/transaction_test.py
Force-pushed from 1489840 to 2a2bd41
Actionable comments posted: 9
♻️ Duplicate comments (6)
src/hiero_sdk_python/transaction/transaction.py (1)
261-261: ⚠️ Potential issue | 🔴 Critical

Don't route `freeze()` through subclass `freeze_with(None)`.

This now dispatches manual freezing into every override with `client=None`. `TokenCreateTransaction.freeze_with()` still reads `client.operator_account_id` before delegating when `auto_renew_period` is set, so `freeze()` can now raise `AttributeError` on that path. Keep `freeze()` on a non-virtual path, or make every override null-safe before it touches `client`.

As per coding guidelines: Any change to public methods (freeze, freeze_with, sign, execute, to_bytes, from_bytes, etc.) includes backwards-compatibility analysis.

src/hiero_sdk_python/file/file_append_transaction.py (2)
300-314: ⚠️ Potential issue | 🔴 Critical

Carry nanosecond overflow into `seconds` for later chunks.

`base_timestamp.nanos + i` can exceed `999_999_999`, which produces an invalid protobuf timestamp for the chunk transaction ID. Multi-chunk append near a second boundary will then fail to freeze or submit correctly.

As per coding guidelines: `valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond)`.

🛠️ Proposed fix

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```
470-478: ⚠️ Potential issue | 🟠 Major

Advance `_current_chunk_index` while computing `body_size_all_chunks`.

`body_size` slices `self.contents` using `_current_chunk_index`, but this loop only swaps `transaction_id`. Every entry is therefore measured against the same chunk payload, so a shorter final chunk reports the wrong size.

🛠️ Proposed fix

```diff
- for transaction_id in self._transaction_ids:
-     self.transaction_id = transaction_id
-     sizes.append(self.body_size)
-
- self._current_chunk_index = original_index
- self.transaction_id = original_transaction_id
- return sizes
+ try:
+     for index, transaction_id in enumerate(self._transaction_ids):
+         self._current_chunk_index = index
+         self.transaction_id = transaction_id
+         sizes.append(self.body_size)
+     return sizes
+ finally:
+     self._current_chunk_index = original_index
+     self.transaction_id = original_transaction_id
```

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
434-442: ⚠️ Potential issue | 🟠 Major

MAJOR: `body_size_all_chunks` never advances `_current_index`.

Both the message slice and `chunkInfo.number` depend on `_current_index`, but this loop only swaps `transaction_id`. As written, every entry is measured against the same chunk, so the returned size list is wrong for multi-chunk messages.

🛠️ Proposed fix

```diff
- for transaction_id in self._transaction_ids:
-     self.transaction_id = transaction_id
-     sizes.append(self.body_size)
-
- self._current_index = original_index
- self.transaction_id = original_transaction_id
- return sizes
+ try:
+     for index, transaction_id in enumerate(self._transaction_ids):
+         self._current_index = index
+         self.transaction_id = transaction_id
+         sizes.append(self.body_size)
+     return sizes
+ finally:
+     self._current_index = original_index
+     self.transaction_id = original_transaction_id
```

tests/integration/file_append_transaction_e2e_test.py (1)
328-339: ⚠️ Potential issue | 🟠 Major

This "manual freeze" test still exercises `freeze_with()`.

The test name says manual freeze, but it uses `freeze_with(env.client)` and leaves a debug `print()` / commented sign call behind. That misses the regression surface this PR is trying to cover.

As per coding guidelines: `Tests should be end-to-end: construct → freeze → sign → execute → verify`.

🛠️ Proposed fix

```diff
+ node_account_id = env.client.network.nodes[0]._account_id
  tx = (
      FileAppendTransaction()
      .set_file_id(file_id)
      .set_chunk_size(1024)
      .set_contents(content)
-     .freeze_with(env.client)
+     .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
+     .set_node_account_id(node_account_id)
+     .freeze()
  )
-
- # tx.sign(env.client.operator_private_key)
-
- print(tx.get_required_chunks())
+ tx.sign(env.client.operator_private_key)
```

tests/unit/transaction_test.py (1)
282-406: ⚠️ Potential issue | 🟠 Major

Chunk-sizing tests still miss high-risk regressions.

These cases can still pass with incorrect implementations: Lines 285 and 322 use exact multiples (no short final chunk), and Line 376 relies on implicit defaults (without explicit `set_chunk_size`, chunking may not occur at all). Add at least one non-multiple payload and assert the last chunk body size differs.

Suggested test hardening

```diff
  def test_file_append_chunk_tx_should_return_list_of_body_sizes(file_id, account_id, transaction_id):
      """Test file append tx should return array of body sizes for multi-chunk transaction."""
      chunk_size = 1024
-     content = "a" * (chunk_size * 3)
+     content = "a" * (chunk_size * 2 + 137)  # force a short final chunk
@@
      sizes = tx.body_size_all_chunks
      assert isinstance(sizes, list)
      assert len(sizes) == 3
+     assert all(isinstance(s, int) and s > 0 for s in sizes)
+     assert sizes[-1] < sizes[0], "Final chunk should be smaller for non-multiple payload"

  def test_message_submit_chunk_tx_should_return_list_of_body_sizes(topic_id, account_id, transaction_id):
@@
      chunk_size = 1024
-     message= "a" * (chunk_size * 3)
+     message = "a" * (chunk_size * 2 + 137)
@@
      sizes = tx.body_size_all_chunks
      assert isinstance(sizes, list)
      assert len(sizes) == 3
+     assert all(isinstance(s, int) and s > 0 for s in sizes)
+     assert sizes[-1] < sizes[0], "Final chunk should be smaller for non-multiple payload"

  def test_chunked_tx_return_proper_sizes(file_id, account_id, transaction_id):
@@
      large_tx = (
          FileAppendTransaction()
          .set_file_id(file_id)
+         .set_chunk_size(1024)
          .set_contents(large_content)
@@
      )
+     assert len(large_tx.body_size_all_chunks) > 1, "Expected multi-chunk transaction"
@@
      small_tx = (
          FileAppendTransaction()
          .set_file_id(file_id)
+         .set_chunk_size(1024)
          .set_contents(small_content)
```

As per coding guidelines: Unit tests should be extensive - test even if we don't expect users to use it currently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5c5fae07-ce46-46cd-8e61-093dfc735866
📒 Files selected for processing (9)
- CHANGELOG.md
- src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
- src/hiero_sdk_python/file/file_append_transaction.py
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/integration/topic_message_submit_transaction_e2e_test.py
- tests/unit/mock_server.py
- tests/unit/topic_message_submit_transaction_test.py
- tests/unit/transaction_test.py
```python
def test_topic_submit_message_raises_error_on_freeze(topic_id):
    """Test transaction raises error on freeze when the transaction_id and node_id not set"""
    tx = (
        TopicMessageSubmitTransaction()
        .set_topic_id(topic_id)
        .set_message("Hello Hiero")
    )

    with pytest.raises(ValueError):
        tx.freeze()
```
Match the expected failure reason in this test.
A bare pytest.raises(ValueError) will also pass on unrelated validation failures, so this doesn't pin the new manual-freeze contract. Please match the missing-transaction-id error text here, and ideally add the sibling missing-node-id case as well.
🛠️ Proposed fix
```diff
- with pytest.raises(ValueError):
+ with pytest.raises(
+     ValueError,
+     match="Transaction ID must be set before freezing",
+ ):
      tx.freeze()
```
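For illustration, the difference `match` makes can be seen on a toy function (the function and its error message are hypothetical, mirroring the message in the proposed fix — this is not the SDK's `freeze()`):

```python
import pytest

def freeze(transaction_id=None):
    """Toy stand-in for a freeze() that fails without a transaction ID."""
    if transaction_id is None:
        raise ValueError("Transaction ID must be set before freezing")
    return "frozen"

# A bare pytest.raises(ValueError) would also pass for any unrelated ValueError;
# `match` applies re.search against str(exc), pinning the specific failure reason.
with pytest.raises(ValueError, match="Transaction ID must be set before freezing"):
    freeze()
```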
Actionable comments posted: 5
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 70d640d3-78dc-4545-a5c8-db16b77403d8
📒 Files selected for processing (3)
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/unit/mock_server.py
Force-pushed from a9fef8c to 31ddd71
```python
        return self._make_request().ByteSize()

    @property
    def body_size(self) -> int:
```
should we use the frozen bytes here?
self._transaction_body_bytes
We can use that, but in a chunked tx the frozen bytes are only valid for a single chunk. Since each chunk has a unique message slice and chunk metadata, we must force a rebuild of the transaction body for each index to measure every chunk's size.
aceppaluni left a comment
@manishdait
I had a question about body_size_all_chunks.
While the state is restored at the end, I'm concerned about a potential issue:
Exception handling: If an exception occurs during the loop iteration, the state restoration code won't execute, potentially leaving the transaction in an inconsistent state. Should we consider using a try/finally block here?
What do you think? Should we consider:
- Adding a try/finally block to ensure state is always restored? OR
- Adding a prominent comment or docstring warning about this side effect?
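The suggested try/finally pattern can be sketched with a toy chunked transaction. This is illustrative only — chunk sizes here are raw payload lengths, not encoded protobuf `TransactionBody` sizes, and the class is not the SDK implementation:

```python
import math

class ToyChunkedTx:
    """Minimal model of per-chunk size measurement with guaranteed state restore."""

    def __init__(self, payload: str, chunk_size: int):
        self.payload = payload
        self.chunk_size = chunk_size
        self._current_index = 0

    @property
    def body_size(self) -> int:
        # Size of the chunk currently selected by _current_index.
        start = self._current_index * self.chunk_size
        return len(self.payload[start:start + self.chunk_size])

    @property
    def body_size_all_chunks(self) -> list:
        original_index = self._current_index
        sizes = []
        try:
            for index in range(math.ceil(len(self.payload) / self.chunk_size)):
                self._current_index = index  # advance so each chunk is measured
                sizes.append(self.body_size)
            return sizes
        finally:
            # Runs even if body_size raises, so the transaction is never
            # left pointing at the wrong chunk.
            self._current_index = original_index

tx = ToyChunkedTx("a" * (1024 * 2 + 137), chunk_size=1024)
print(tx.body_size_all_chunks)  # [1024, 1024, 137] — short final chunk measured correctly
print(tx._current_index)        # 0 — state restored after measurement
```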
Signed-off-by: Manish Dait <daitmanish88@gmail.com>
@aceppaluni, thanks for the review. I think try/finally will be a good choice.
Force-pushed from 041378b to 4ff6cd6
Description:
This PR introduces new helper properties to allow calculating transaction size prior to submission.
Added Properties
- `size`: Returns the total encoded transaction size in bytes (including signatures).
- `body_size`: Returns the encoded `TransactionBody` size in bytes.
- `body_size_all_chunks`: Returns a list of encoded body sizes for each chunk in multi-chunk transactions (e.g., `FileAppendTransaction`, `TopicMessageSubmitTransaction`).

Changes Made

- Added the new `@property` helpers: `size`, `body_size`, `body_size_all_chunks`
- Refactored `freeze()` to properly support chunked transaction size calculations
- Fixed an issue in `FileAppendTransaction.execute()` caused by improper indentation
- Added comprehensive unit and e2e tests
Related issue(s):
Fixes #1794
Notes for reviewer:
Checklist