Unfortunately, the only way to extract logs from a grid node server is through the standard logging options, which are disabled by default.
It's not really a surprise to me that the node had trouble coping with an out-of-disk-space scenario. The error handling in this area is very light, so there are any number of ways such an issue could leave the node in a dysfunctional state. If the disk containing the node's workspaces was itself reset, things would definitely go horribly wrong.
In theory, restarting the grid node service (either directly on the VM or via the NCrunch restart option in the Distributed Processing Window) should solve this problem. The test runner process will detect the absence of its host process and should self-terminate.