Panic in SLOAD when reloading the chain after it died or during normal chain operation #626
Following @benjaminbollen's suggestion, I removed this commit: 288737d. It seems to make the problems go away. Not really conclusive at this stage, but a strong indication that this commit introduces a regression.
@ratranqu are you able to share the Solidity that caused this? Or a minimal reproduction in Solidity?
Also, are you running any transactions that are calling `selfdestruct`? It looks like we have an attempt to access the storage of a destructed contract. Off the top of my head I don't think that makes sense in terms of EVM semantics, though I could be wrong. That being the case, we may be seeing a non-deterministic reordering of transactions during the commit phase or something like that, though I need to dig a bit more.
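For context on the semantics point, here is a minimal toy sketch (none of these names are burrow's real types) of what an SLOAD against missing storage is conventionally expected to do: reads of an absent slot, or of storage belonging to a removed account, yield zero rather than faulting.

```go
package main

import "fmt"

// sload is a toy model of the SLOAD semantics discussed above; the
// names and types are illustrative, not burrow's actual implementation.
// Reading storage that does not exist (e.g. because the account was
// destructed) yields zero, never a panic.
func sload(state map[string]map[string]uint64, addr, key string) uint64 {
	storage, ok := state[addr]
	if !ok {
		// Account absent or destructed: EVM reads default to zero.
		return 0
	}
	return storage[key] // missing slots also read as zero
}

func main() {
	state := map[string]map[string]uint64{
		"0xaa": {"slot0": 42},
	}
	fmt.Println(sload(state, "0xaa", "slot0")) // 42
	fmt.Println(sload(state, "0xbb", "slot0")) // 0: destructed/unknown account
}
```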
Are you consistently seeing this issue with 288737d and not without? From the IAVL tree perspective, I suspect that a node is getting orphaned and removed by one tree that shares a node db with another. Again, we'll really need a working test case to look into this further.
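To illustrate the suspected failure mode, here is a toy sketch, with hypothetical types rather than the real IAVL/go-merkle API: two tree views share one node database, one view prunes an orphaned node during commit, and the other still holds a pointer to it.

```go
package main

import "fmt"

// nodeDB stands in for the shared on-disk node store.
type nodeDB map[string][]byte

// treeView holds a pointer (key) into the shared nodeDB, as an IAVL
// tree holds node hashes.
type treeView struct {
	db   nodeDB
	root string
}

// load dereferences the view's root; it panics on a dangling pointer,
// mirroring go-merkle's assumption that referenced nodes always exist.
func (t *treeView) load() []byte {
	v, ok := t.db[t.root]
	if !ok {
		panic(fmt.Sprintf("couldn't get node: %s", t.root))
	}
	return v
}

func main() {
	db := nodeDB{"n1": []byte("state")}

	committed := &treeView{db: db, root: "n1"} // e.g. the committed state
	cached := &treeView{db: db, root: "n1"}    // e.g. a long-lived cache view

	// The committed tree rewrites the node and prunes the orphan.
	db["n2"] = []byte("state'")
	committed.root = "n2"
	delete(db, "n1") // orphan removed from the shared db

	// The stale view still points at the pruned node and panics,
	// analogous to the SLOAD panic in the report.
	_ = cached.load()
}
```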
It's worth noting that the panic originates outside of the BlockCache, in go-merkle. go-merkle doesn't expect a dangling node pointer to ever occur (whether that is a reasonable assumption on go-merkle's part, given what it allows, is another question).
There are two factors at play here: the updates to go-merkle and the updates to BlockCache. Given how go-merkle behaves, the likely solution is to construct a new BlockCache after committing to the database.
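A rough sketch of that suggestion, using stand-in types rather than burrow's real ones: rebuild the cache against the post-commit state on every block, so it never retains pointers into tree nodes that the commit may have pruned.

```go
package main

// State and BlockCache are hypothetical stand-ins for burrow's types;
// this only illustrates the shape of the proposed fix.
type State struct {
	version int // stands in for the committed merkle root
}

// Save commits the state; nodes orphaned by the commit may then be
// pruned from the shared node db.
func (s *State) Save() { s.version++ }

// BlockCache buffers reads and writes over one committed state.
type BlockCache struct {
	backing *State
	dirty   map[string][]byte
}

func NewBlockCache(s *State) *BlockCache {
	return &BlockCache{backing: s, dirty: make(map[string][]byte)}
}

// commitBlock flushes and commits, then returns a *new* cache rooted
// at the post-commit state instead of reusing the old one, so no
// stale node pointers survive the commit.
func commitBlock(s *State, cache *BlockCache) *BlockCache {
	// ... write cache.dirty through to s ...
	s.Save()
	return NewBlockCache(s)
}

func main() {
	s := &State{}
	cache := NewBlockCache(s)
	cache = commitBlock(s, cache) // fresh cache each block
	_ = cache
}
```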
Thanks guys for looking into it. @silasdavis: there are no explicit `selfdestruct` calls in the Solidity code, and yes, I am consistently seeing this issue with 288737d and not without.
That may be a solution, but I don't yet see how this is being triggered. I would like to fix this against a clear failing test case. Are you able to share some Solidity code that caused this so we can reproduce?
@ratranqu any chance you could provide us with some Solidity/usage that reproduces this issue? We really don't have enough to go on from the stack trace alone, without being able to get into the same initial state and without knowing what kind of requests are hitting the instance. A minimal test case would be much appreciated. Or, failing that, if you could share some code in private we could try to come up with one.
@ratranqu bump, can you provide a more detailed reproduction of this?
@silasdavis, unfortunately, not really. I still have the full state of the blockchain where it dies; however, the way I was getting it was by calling Solidity code from a Swift binary, neither of which is practical to share.
I have not had time to isolate it further and do not foresee having time to do so in the near future.
I can share the full state of the blockchain if that helps, so you could identify why it dies during the replay?
I'm fine with closing the bug as well if I'm the only one affected by this.
I can appreciate the time issue; if you could share the state, that would be great (and probably enough for me to progress with it). If you are able to share it here, you should be able to drag and drop a .zip/.gz into the comment box and it will upload. Failing that, you could send it to me at .iosilas@monax (rotate left 3).
This has all changed dramatically. Closing for now.
Please include in your bug report:
burrow version (docker image tag or branch if built from source): develop branch
Panic on SLOAD when rebuilding the chain or during normal operation of the chain (it just happens).
`burrow serve --work-dir /home/ubuntu/decpub/single -d`
Last op and stack trace below: