
Add support for generating a Make compatible depfile #41

Open

dcbaker wants to merge 3 commits into aradi:main from dcbaker:depfile

Conversation

@dcbaker dcbaker commented Mar 29, 2024

This allows fypp to communicate to build systems like Make and Ninja (including through build system generators like CMake and Meson) files that need to be implicitly included in the build graph. This is important for getting correct incremental rebuilds when such a file is changed.
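For context, the depfile format these tools consume is a plain Make rule: the output path, a colon, and the prerequisites discovered during preprocessing. The paths below are illustrative, not taken from the PR:

```make
# build/output.f90.d -- generated alongside the output; Make and
# Ninja re-run the rule when any listed prerequisite changes.
build/output.f90: src/input.fypp src/common.inc src/macros.fypp
```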

Owner

@aradi aradi left a comment


This is a very nice addition, thanks! The only thing I am somewhat unsure about is the escaping, see my comments below.

Comment threads (outdated): bin/fypp ×4, test/test_fypp.py ×1
Author

dcbaker commented Apr 8, 2024

Updated with the requested changes. I've also updated the test case and added a couple more cases, including one that explicitly tests the escaping behavior.
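The PR's actual escaping rules aren't shown in this thread; as a hedged illustration, a helper along these lines (the name `escape_for_depfile` is hypothetical) covers the characters Make treats specially in prerequisite paths:

```python
def escape_for_depfile(path):
    # Make splits prerequisites on whitespace and starts comments
    # at '#', so both must be backslash-escaped; escape existing
    # backslashes first so the other escapes are not doubled.
    return (path.replace('\\', '\\\\')
                .replace(' ', '\\ ')
                .replace('#', '\\#'))

print(escape_for_depfile('dir with spaces/mod#1.inc'))
```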

This allows executing the unittests directly with `python test_fypp.py`,
which allows easily passing options into the test harness. It also
avoids having to know whether the user has set PYTHONPATH, since the
local directory will be evaluated after PYTHONPATH.
This allows build systems that can consume them (Make and Ninja, in
particular) to figure out at build time which files are implicitly
included in the dependency graph of a fypp output.

This means, for example, that if you have a target that includes two
other files, an incremental rebuild can be triggered automatically if
one of the included files is modified.
Author

dcbaker commented Sep 11, 2025

I've rebased this across the recent code reorganization

Owner

aradi commented Oct 29, 2025

OK, perfect (and sorry for the long delays). I think everything is fine so far, and the PR should be merged.

I have only one API design question: what is the advantage of not writing the depfile already during the process_file() call, but requiring a separate write_dependencies() call? For the latter, you have to pass the name of the depfile again, even though it has already been passed via the command line option --depfile.

At first glance, it seems more logical to me to write the depfile during the process_file() call and drop the extra write_dependencies() method. Any views on this?

@ijatinydv

Hey, I've been going through this PR pretty carefully because I'm working on a GSoC proposal to add incremental preprocessing to fpm, and this feature is exactly what I need for tracking #:include dependencies correctly. Wanted to try to help move things forward since this has been sitting for a while.

On @aradi's API question:
I think folding it into process_file() is the right call. The way it works now, outfile gets passed twice, and there's nothing stopping someone from calling write_dependencies() before process_file(); the API just silently gives wrong results. The integrated approach also matches how GCC's -MMD works, which is probably what most build system users already expect.

Something like this:

def process_file(self, infile, outfile=None):
    # ... existing processing ...
    if self._options.depfile and outfile and outfile != '-':
        self.write_dependencies(outfile, self._options.depfile)

write_dependencies() and get_dependencies() can stay public for anyone who needs finer control, but the common case becomes automatic.

One bug I noticed while reading:
Parser._included_files gets initialised in __init__ but never resets between calls. So if anyone reuses a Fypp instance across multiple files, the dependency list accumulates stale entries from previous runs:

tool = Fypp()
tool.process_file('a.fypp', 'a.f90')  # _included_files = ['x.inc']
tool.process_file('b.fypp', 'b.f90')  # _included_files = ['x.inc', 'y.inc']  ← x.inc is stale

CLI users won't hit this because run_fypp() creates a fresh instance every time, but fpm would be calling the API directly and reusing the instance, so this would actually affect my use case. The fix would be adding self._included_files = [] at the top of Parser.parsefile() and Parser.parse(), similar to how Builder.reset() already handles this for the builder.
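A minimal stand-in for that reset pattern (the names mirror the fypp internals discussed above, but this is an illustration, not fypp's actual code):

```python
class Parser:
    """Toy parser that records which files a run included."""

    def __init__(self):
        self._included_files = []

    def parsefile(self, fname, includes):
        # Reset per run: without this line, a reused instance would
        # accumulate stale entries from previously processed files.
        self._included_files = []
        self._included_files.extend(includes)
        return list(self._included_files)

tool = Parser()
deps_a = tool.parsefile('a.fypp', ['x.inc'])
deps_b = tool.parsefile('b.fypp', ['y.inc'])
```

With the reset in place, deps_b contains only 'y.inc'; deleting the reset line reproduces the stale-entry accumulation described above.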
I'm happy to open a small follow-up PR with both of these if that makes it easier to get this merged. Thanks for the work on this; it's a genuinely useful feature.

Owner

aradi commented Mar 20, 2026

Dear @ijatinydv , thanks for reviving this issue. Definitely interested in the feature, so feel free to make PRs, I agree on both points. (I'll be away from keyboard for somewhat more than a week, though, so my response times might become somewhat longish again.)
