Autosave and saving need to be parallelized

After adding the Foundry camera solver, an immediate issue comes up whenever HitFilm autosaves or I manually save. Because HitFilm's save function requires all processes other than the render engine to stop, data from the camera solver needs to be saved as well. This can be A LOT of data, and the save time scales directly with it. A single camera solve can push save times upwards of 15 seconds, which is a nuisance if you plan on having an autosave interval of 1-5 minutes. It only gets worse from there. It especially frustrates me when I am sharing my progress with a friend and "IT'S AUTOSAVE TIME!!!" is followed by an awkward pause as we listen to an audio stream with a frozen screen. Therefore I propose that the save function be parallelized so that there is no impact on editing at all while it happens. I don't know how difficult this is to implement, or if it's even possible, but I would like some optimization of the save system.
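For what it's worth, the usual way to parallelize a save without risking a half-edited project is to snapshot the state quickly on the UI thread and hand the slow serialization and disk I/O to a worker thread. This is only an illustrative sketch, not HitFilm's actual code; the `Project` class and its fields are invented for the example.

```python
# Hypothetical background-save sketch: snapshot under a lock, then
# serialize and write on a worker thread so the UI thread is only
# blocked for the (cheap) in-memory copy. Names are illustrative.
import copy
import json
import os
import tempfile
import threading

class Project:
    def __init__(self):
        self.state = {"comps": []}
        self._lock = threading.Lock()

    def edit(self, key, value):
        with self._lock:
            self.state[key].append(value)

    def save_async(self, path):
        # Take a consistent snapshot while holding the lock...
        with self._lock:
            snapshot = copy.deepcopy(self.state)
        # ...then do the slow work off the UI thread.
        t = threading.Thread(target=self._write, args=(snapshot, path))
        t.start()
        return t

    @staticmethod
    def _write(snapshot, path):
        # Write to a temp file, then rename, so a crash mid-write
        # never leaves a corrupt project file behind.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
        with os.fdopen(fd, "w") as f:
            json.dump(snapshot, f)
        os.replace(tmp, path)

proj = Project()
proj.edit("comps", {"name": "Comp 1"})
proj.save_async("project_snapshot.json").join()  # .join() only for the demo
```

The snapshot cost still scales with project size, but a copy of in-memory structures is normally far cheaper than serializing plus flushing to disk.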

Testing was done on a system with these specs: CPU: Ryzen 9 3900X, GPU: GTX 1070, editing drive: ADATA 500 GB NVMe SSD, RAM: 32 GB, OS: Windows 10.

Tracking settings: number of features: 150; total tracked frames: 1119; clip format: 4K 60 fps.

Total project file size: 53.3MB 

Save time: 8.5 seconds 

Comments

  • edited September 13

I don't think you can async the save function. The data you are attempting to save needs to be atomic, so you cannot allow it to be altered in any way while the save is in progress.

This was discussed in another thread, but the tracking data is the mother of all data bloat, as can be seen from the size of a project containing even a simple track. The plain-text XML has awful bloat for a simple tracked point position. There should be some mechanisms, as discussed in the other thread, that would alleviate that.

The HitFilm file save is quite slow for some reason, given the amount written, even on an NVMe SSD (TLC). Not sure why. A super quick test case (one project, one run, 10-second track) shows reasonable I/O patterns. The massive block for what is likely the track data is arguably excessive, but not likely harmful.

It does appear that HitFilm is forcing a flush of written data to disk (the FlushFileBuffers API, which issues a FlushBuffersFile kernel call). This will block the app until the flush commits all pending file I/O to the storage device. That will cause some slowdown. How much? I cannot say in this circumstance, because I cannot separate the I/O from the buffer construction. In my experience, using a flush can cause noticeable delays. Even a seemingly simple single 4-byte write with a flush can apparently cascade into other pending I/O in the system, causing visible stalls. While it seems like a good thing, you don't really gain much when you get down to the uglies of how things actually work under the covers. You are just trying to shorten the time window in which a power failure could affect your disk write. Not worth the slowdown: a commit to the device does not totally protect against power failure during writes, since even simple hard disks have write cache buffers. Get a UPS.

edit: Looked at the I/O durations and completion times. Everything seems fine up to the file rename; nothing big. I'm on an NVMe SSD with a small test case, and there is still a noticeable blocking delay on save. A spinning disk does get slowed by flushes. I don't know what the delay in UI blocking is about; I looked for something obvious and found nothing.
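To make the flush cost concrete: `os.fsync()` is the POSIX/Python counterpart of Win32 `FlushFileBuffers`, and timing a buffered write against a flushed one shows the difference the comment describes. A minimal sketch (file names and sizes are invented; actual timings depend entirely on the hardware):

```python
# Compare a buffered save against one that forces a device commit.
# os.fsync() blocks until the OS has pushed pending writes to storage,
# which is the behaviour attributed to FlushFileBuffers above.
import os
import time

def save(path, data, flush_to_disk):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()                  # flush Python's buffer to the OS
        if flush_to_disk:
            os.fsync(f.fileno())   # block until the device commit finishes

payload = b"x" * (4 * 1024 * 1024)  # a few MB of fake project data

t0 = time.perf_counter()
save("buffered.bin", payload, flush_to_disk=False)
t_buffered = time.perf_counter() - t0

t0 = time.perf_counter()
save("synced.bin", payload, flush_to_disk=True)
t_synced = time.perf_counter() - t0

# The synced save is typically slower; on a spindle disk the gap
# can be dramatic.
print(f"buffered: {t_buffered:.4f}s  synced: {t_synced:.4f}s")
```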


If anything, I would think it would be plausible to move the tracking data to a separate file, so the project save doesn't have to be the one holding it.
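The integrity worry with a sidecar (is the tracking file the one written alongside this project save?) has a trivial fix: stamp both files with the same revision id at save time and compare on load. A sketch, with file extensions and field names invented for the example:

```python
# Keep a project file and its tracking-data sidecar in sync by writing
# the same revision id into both, and refusing to load a mismatched pair.
import json
import uuid

def save_with_sidecar(project, tracking, base):
    rev = str(uuid.uuid4())  # shared stamp for this save
    with open(base + ".project", "w") as f:
        json.dump({"revision": rev, "project": project}, f)
    with open(base + ".tracking", "w") as f:
        json.dump({"revision": rev, "tracking": tracking}, f)

def load_with_sidecar(base):
    with open(base + ".project") as f:
        proj = json.load(f)
    with open(base + ".tracking") as f:
        track = json.load(f)
    if proj["revision"] != track["revision"]:
        raise ValueError("sidecar does not match project revision")
    return proj["project"], track["tracking"]

save_with_sidecar({"name": "demo"}, [[0.1, 0.2]], "scene")
project, tracking = load_with_sidecar("scene")
```

As noted later in the thread, this is not tamper-proof, only a consistency check against stale or mismatched files.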

The other solution would be to switch to a database model, like DaVinci Resolve and other data-intensive applications use. Changes would then be saved to the DB, and only the changes, not all data every time even when it hasn't changed, as happens when saving a file. DBs usually also support transaction-based updates to avoid conflicts, plus backup and recovery.

However, this would probably imply a (very) major change to the software design.
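The incremental-save idea above can be sketched with SQLite: track which entries are dirty and write only those, inside one transaction. The table layout and key names are invented for illustration, not anything Resolve or HitFilm actually uses.

```python
# Database-style project save: only changed rows are written, inside a
# single transaction, instead of rewriting the whole project every time.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real app would use a file on disk
conn.execute("CREATE TABLE project (key TEXT PRIMARY KEY, value TEXT)")

dirty = {}  # entries changed since the last save

def set_value(key, value):
    dirty[key] = value

def save_changes():
    # One transaction: either every pending change commits or none does.
    with conn:
        conn.executemany(
            "INSERT OR REPLACE INTO project (key, value) VALUES (?, ?)",
            dirty.items(),
        )
    dirty.clear()

set_value("comp1.name", "Main Comp")
set_value("tracker.frames", "1119")
save_changes()                 # writes two rows
set_value("comp1.name", "Renamed")
save_changes()                 # writes only the one changed row
```

A 247 MB project where only one comp changed would then cost a few kilobytes per autosave instead of a full rewrite.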

Triem23 Moderator

    @pinthenet any kind of sidecar file, like @Erosion139 posits, would be a major change to the software design. 

  • I would disagree that a sidecar is a major design change.

You're writing the project file and you come to the tracking data. Right now you write it into the same file; the change would be to write it to a sidecar file instead. Minor change. Reading is the inverse; again minor. Data integrity is a concern when two or more files describe a single entity (the project): is the tracking-data sidecar the same data written at the time of the project write? Short story: there are obvious, trivial ways to handle this. Certainly not tamper-proof; if someone wants to tamper, they can, just like they can tamper with the project file itself.

The data bloat is seriously stupid. Somebody has to give a damn about such things. I'm not sure that dataset size is the source of the delay, at least not completely. It would be a very real problem for spindle hard disks, or arrays of such. On my test with an NVMe SSD (TLC) and a 16 MB project file (one 1080p30 media file, one comp, a 10-second track, no solve), the dataset write should still be instant, and seems to be according to the API logs (very brief look-see), yet there definitely is a UI block for some seconds. Not terrible times like the OP talked of, but they had a real project with more stuff than my trivially simple test case. So data size is one thing, but there may be something else.

The tracking data size is an obvious thing. Plain-text XML is no excuse: I'll bet the tag structure could be done more efficiently. In another thread I proposed some other structures. Would that be more work to write/read than what we have? Maybe, probably. No excuses; just give a damn. In a previous thread on this topic, I talked about a case where we removed redundancy, and thus reduced dataset size, and gained a huge performance win. The CPU-to-I/O ratio was lower then than it is now.
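To put a rough number on the tag-structure argument: the same track points stored one-element-per-frame versus packed into a single element differ by several times in size, even staying in plain-text XML. The tag names here are invented; this is not HitFilm's actual schema.

```python
# Same 2D track points, two XML encodings: verbose per-frame elements
# versus one element with a packed payload. Both are plain text.
points = [(f + 0.5, f + 0.25) for f in range(1000)]  # fake track points

# Verbose style: one element per frame, one attribute per coordinate.
verbose_parts = ["<track>"]
for frame, (x, y) in enumerate(points):
    verbose_parts.append(f'<pt frame="{frame}" x="{x}" y="{y}"/>')
verbose_parts.append("</track>")
verbose = "".join(verbose_parts)

# Compact style: the same data packed into one element; frame numbers
# are implicit in the ordering.
packed = " ".join(f"{x},{y}" for x, y in points)
compact = f'<track count="{len(points)}">{packed}</track>'

print(len(verbose), len(compact))  # the packed form is a fraction of the size
```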


I used to use Boujou for my camera tracking, which of course meant it always created a separate file for HitFilm to read. I don't know if it's due to a license agreement that the Foundry tracker has no way to export its tracked data to a standard camera-track format. AFAIK Premiere's Foundry integration is the same. That may be why they save it into the HitFilm-specific save file, as a means to prevent use of the tracker in other software.

Triem23 Moderator

@Erosion139 yes, I agree with that - in fact, that's what I was about to type as an expansion of my prior comment (which was a "just before bed" line).

@NormanPCN so, yeah, "major change" is more about FXHOME and Foundry controlling the tracking data by burying it in the project file. Still dunno the Express cost for the add-on, but, whatever it is, it's going to be less than Foundry's other solutions or competitors' products. At a guess, a sidecar would be easier to "crack" to pull the data into other software. I think HF is trying to avoid that.


On a side note, has anyone else been having issues with mocha and caching the video file? When I play it through to load it into RAM, it just refuses to fill my RAM beyond 60% or so, even though I have the max memory set to 99%. I was thinking it was due to standby memory, which tends to climb enough to fill the rest of my memory, but I cannot get consistent results. It just seems not to want to, for no reason.

  • ""major change" more being about FXHOME and Foundry controlling the tracking data by burying it in the project file."

    I doubt Foundry has any say in the matter of saving/reading tracking data to/from a Hitfilm project file. The plug-in just gives the binary data to the host and the host decides what if anything to do with it.

    "At a guess, a sidecar would be easier to "crack" to pull the data into other software."

    The sidecar would likely be saved in the exact same structure as it is saved right now in the project file. FxHome chooses to save in plain text XML, with a tag structure of their design. A sidecar is not going to change this decision. With a sidecar the same data is just in the house next door. Just as easy to crack.

After adding only a few more camera tracks, it has become such a pain when the autosave starts that I just end up getting distracted by something else, and when it finishes I can't remember what I was doing XD

  • edited September 15

Save time is now 23 seconds. YIKES! On top of that, I can't tell whether it crashed or is autosaving, because I just get the same hang. This is genuinely making my life so much harder. I really hope Premiere doesn't have this problem, because this is infuriating.

  • edited September 15

    Yikes is a kind word for it.

@Erosion139 What is the size of the project file for that 23-second save? Was that an autosave? If so, does a straight save take as long (UI blocked until the save is done)? Is the location of the HitFilm autosave files on your NVMe SSD, a spindle HD, or a regular SATA SSD?

I only looked at a straight save, assuming the same code is used for autosave, just with different output target file(s). I might look at an autosave to see if anything different is going on, but I still see a UI block even on a normal save. I noticed the typical save algorithm: save to a temp file, then on a successful save, rename the temp to the target file name. I saw the temp in the same folder as the project file.

    ---

    Linking the other previous thread on this topic.
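The temp-then-rename algorithm observed above can be sketched as follows; the function name is invented, but the pattern is the standard one, where the rename is atomic on the same filesystem so the target file is never left half-written:

```python
# Write-temp-then-rename save, as observed in HitFilm's behaviour:
# the real project file only ever changes via an atomic rename.
import os
import tempfile

def atomic_save(path, data):
    dir_name = os.path.dirname(os.path.abspath(path))
    # Temp file must live in the SAME folder as the target, so the
    # final rename stays on one filesystem (and stays atomic).
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # the device flush discussed above
        os.replace(tmp, path)      # atomic swap over the old file
    except BaseException:
        os.unlink(tmp)             # clean up the temp on failure
        raise

atomic_save("demo_project.hfp", b"<project/>")
```

Note that the pattern guarantees crash safety, not speed; the fsync before the rename is exactly where the earlier-mentioned blocking cost lives.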

I triggered a manual save, as the autosave takes a similar if not identical time. The save destination is the NVMe SSD (which shows just about no usage while saving), and the UI is completely blocked while it saves. The file size is now 247 MB.
