Problem

Today we eagerly delete all invalidated outputs, and then incrementally re-output them during the build.
This causes problems for the analyzer (or any other tool watching the file system) because of the large number of file updates spread over a long period of time. The analyzer ends up re-analyzing the same files multiple times as their dependencies are written, and it also reports spurious errors from the moment the build starts until it completes.
This leads to a bad IDE experience for users, especially in large projects with long builds.
Solution
The general idea is to delay all file system interactions until a build is complete, and then do the minimal amount of file system interaction possible. If a file was invalidated but re-written with identical content, we shouldn't touch it at all. If it changed, we should overwrite the existing file directly, and so on.
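For the "don't touch unchanged files" case, the check at flush time might look like this minimal sketch. A raw byte comparison stands in for the digest comparison the real implementation would use, and `writeIfChanged` is a hypothetical helper, not an existing API:

```dart
import 'dart:io';

/// Writes [newBytes] to [path] only if the on-disk content differs.
/// Returns true if the file was actually touched.
Future<bool> writeIfChanged(String path, List<int> newBytes) async {
  final file = File(path);
  if (await file.exists()) {
    final existing = await file.readAsBytes();
    if (_bytesEqual(existing, newBytes)) return false; // leave untouched
  }
  await file.writeAsBytes(newBytes);
  return true;
}

bool _bytesEqual(List<int> a, List<int> b) {
  if (a.length != b.length) return false;
  for (var i = 0; i < a.length; i++) {
    if (a[i] != b[i]) return false;
  }
  return true;
}
```

In practice a digest lookup through the existing asset graph would avoid re-reading file contents just to compare them.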
High level design
Add a new lifecycle method to `RunnerAssetWriter`, `onBuildComplete` or similar, which is called when a build finishes. Most implementations will simply delegate the call to any writer they wrap, or call `super` if they extend another writer.
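As a sketch, the new hook and the typical delegation might look like this. The method name `onBuildComplete` comes from the proposal, but the signatures are simplified (plain `String` ids and byte lists rather than `AssetId`s) and are not the real `RunnerAssetWriter` API:

```dart
/// Simplified stand-in for the existing writer interface, with the
/// proposed lifecycle hook added.
abstract class RunnerAssetWriter {
  Future<void> writeAsBytes(String id, List<int> bytes);
  Future<void> delete(String id);

  /// New: called exactly once when a build finishes.
  Future<void> onBuildComplete();
}

/// A writer that wraps another writer typically just forwards the hook.
class ForwardingWriter implements RunnerAssetWriter {
  ForwardingWriter(this._delegate);

  final RunnerAssetWriter _delegate;

  @override
  Future<void> writeAsBytes(String id, List<int> bytes) =>
      _delegate.writeAsBytes(id, bytes);

  @override
  Future<void> delete(String id) => _delegate.delete(id);

  @override
  Future<void> onBuildComplete() => _delegate.onBuildComplete();
}
```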
Add a new `DelayedAssetWriter` which only records actions. It wraps another writer but does not delegate calls to it; instead it captures pending deletes and pending writes as they happen, caching the bytes of each write. It will implement `onBuildComplete` and perform all the actual file system interactions there. This class will also need to take an `AssetReader` in its constructor if we want it to compare digests and skip unnecessary writes.
We will also need a new `AssetReader` implementation that holds a reference to this `DelayedAssetWriter` and reads files from its cache. More design work is needed to figure out exactly where to slot that reader in.
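Put together, the delayed writer and a cache-aware reader might look roughly like this. This is a sketch under simplified signatures (plain `String` ids, byte lists, a read callback in place of a real `AssetReader`); names like `pendingContent` are hypothetical, not proposed API:

```dart
/// Simplified stand-in for the writer interface, with the proposed hook.
abstract class RunnerAssetWriter {
  Future<void> writeAsBytes(String id, List<int> bytes);
  Future<void> delete(String id);
  Future<void> onBuildComplete();
}

/// Records writes and deletes in memory and only replays them against
/// the wrapped writer when the build completes.
class DelayedAssetWriter implements RunnerAssetWriter {
  DelayedAssetWriter(this._delegate);

  final RunnerAssetWriter _delegate;
  final Map<String, List<int>> _pendingWrites = {};
  final Set<String> _pendingDeletes = {};

  @override
  Future<void> writeAsBytes(String id, List<int> bytes) async {
    _pendingDeletes.remove(id);
    _pendingWrites[id] = bytes; // cache the bytes, don't touch disk yet
  }

  @override
  Future<void> delete(String id) async {
    _pendingWrites.remove(id);
    _pendingDeletes.add(id);
  }

  /// Exposed so a companion reader can see in-flight writes.
  List<int>? pendingContent(String id) => _pendingWrites[id];

  @override
  Future<void> onBuildComplete() async {
    for (final id in _pendingDeletes) {
      await _delegate.delete(id);
    }
    for (final entry in _pendingWrites.entries) {
      await _delegate.writeAsBytes(entry.key, entry.value);
    }
    _pendingDeletes.clear();
    _pendingWrites.clear();
    await _delegate.onBuildComplete();
  }
}

/// Reader that checks the delayed writer's cache before going to disk.
class DelayedAssetReader {
  DelayedAssetReader(this._writer, this._readFromDisk);

  final DelayedAssetWriter _writer;
  final Future<List<int>> Function(String id) _readFromDisk;

  Future<List<int>> readAsBytes(String id) async =>
      _writer.pendingContent(id) ?? await _readFromDisk(id);
}
```

The key property is that the wrapped writer sees nothing until `onBuildComplete`, while reads during the build are served from the in-memory cache.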
Risks
Cache Size
This strategy requires all files written in a given build to fit in memory at once, which means an unbounded cache size. Some options we could consider:
- Have a bounded-size cache, and write files in batches if the cache starts getting full.
- Only use this strategy for Dart files, and not other files. Some other tools might not get the benefit, but it would still fix the analyzer problem.
- Only use this strategy for files under a certain size.
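The first option could be sketched as a write buffer with a byte budget that flushes a batch when the budget is exceeded. Everything here (`BoundedWriteCache`, `maxBytes`, the `flush` callback) is illustrative, not an existing API:

```dart
/// Buffers pending writes up to [maxBytes], then hands the whole batch
/// to [flush] (which would do the real file system work).
class BoundedWriteCache {
  BoundedWriteCache(this.maxBytes, this.flush);

  final int maxBytes;
  final Future<void> Function(Map<String, List<int>> batch) flush;

  final Map<String, List<int>> _pending = {};
  int _bytesUsed = 0;

  Future<void> add(String id, List<int> bytes) async {
    // Overwriting an id replaces its contribution to the budget.
    _bytesUsed += bytes.length - (_pending[id]?.length ?? 0);
    _pending[id] = bytes;
    if (_bytesUsed > maxBytes) await flushAll();
  }

  Future<void> flushAll() async {
    if (_pending.isEmpty) return;
    await flush(Map.of(_pending));
    _pending.clear();
    _bytesUsed = 0;
  }
}
```

Note that batched flushing trades away some of the benefit: files written mid-build would again be visible to watchers before the build finishes, just less often.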
Could be breaking for some builders
If a builder today calls out to an external process which reads from the file system, that process won't see updated files.
Such builders are already encouraged to use `package:scratch_space`, which won't have this problem; all of our own builders do this.
Do we already have a way that written assets are communicated through to the AssetReader, or are we going through the files system always today?
I think we should consider any flags related to this as experimental until we nail down which heuristics are actually worthwhile in practice. I don't want to be stuck with multiple strategies to tweak cache size if we find out that our first attempt doesn't work well, so we should communicate that clearly by putting "experiment" in the flag names.
> Do we already have a way that written assets are communicated through to the AssetReader, or are we going through the file system always today?
I don't believe the reader and writer communicate today; we end up reading the file from disk even if it was just written, and we don't cache it in memory.