This needs to be framed somewhere, very lucid explanation
* https://www.spinellis.gr/blog/20060626/
* https://www.spinellis.gr/pubs/jrnl/2006-DDJ-Finessing/html/S...
* https://gcc.gnu.org/legacy-ml/gcc-prs/2001-q1/msg00495.html
* https://marc.info/?l=boost&m=118835769257658&w=2
viega 2 days ago [-]
Wow, I'm not sure I've ever seen this (or if I did, it was 20 years ago).
And I was definitely looking for this kind of history when I was searching around while writing. Perhaps my google skills have decayed... or google... or both!
Thanks very much.
kragen 1 days ago [-]
Oh, are you L33 T.?
viega 1 days ago [-]
If my google fu is that bad, it's pretty much impossible to be.
kragen 24 hours ago [-]
Oh, I understood you to be saying you'd written this article.
Joker_vD 1 days ago [-]
Honestly, it feels like something like this should have been put in the standard instead of all the English prose that ended up in the section about preprocessor expansion. Yeah, it's not pretty, but at least it requires way less skill in hermeneutics to understand correctly.
kragen 1 days ago [-]
Thank you very much for providing these links!
kragen 1 days ago [-]
I think the C preprocessor was designed after the GPM clone m6, its successor m4, and Ratfor, so I suspect the difficulty in doing things like this is intentional. I guess I should ask McIlroy, who is responsible for pushing m4 to its absolute limits and was present when the C preprocessor was being designed: https://www.cs.dartmouth.edu/~doug/barem4.m4
_ Pure macros as a programming language
_
_ m4 is Turing complete even when stripped to the bare minimum
_ of one builtin: `define'. This is not news; Christopher
_ Strachey demonstrated it in his ancestral GPM, described in
_ "A general- purpose macrogenerator", The Computer Journal 8
_ (1965) 225-241.
_
_ This m4 program more fully illustrates universality by
_ building familiar programming capabilities: unlimited
_ precision integer arithmetic, boolean algebra, conditional
_ execution, case-switching, and some higher-level operators
_ from functional programming. In support of these normal
_ facilities, however, the program exploits some unusual
_ programming idioms:
_
_ 1. Case-switching via macro names constructed on the fly.
_ 2. Equality testing by redefining macros.
_ 3. Representing data structures by nested parenthesized lists.
_ 4. Using macros as associative memory.
_ 5. Inserting nested parameter symbols on the fly.
_
_ Idioms 2 and 5 are "reflective": the program writes code
_ for itself.
It's very easy to get into enormous amounts of trouble in m4, m6, or GPM. The C preprocessor is not without its problems, but it is rare that I have difficulty in understanding why a given gcc -E invocation produces the output it does.
There you can find a recursive macro expansion implementation (as a gcc hack) that fits on a slide:
#2""3
#define PRAGMA(...) _Pragma(#__VA_ARGS__)
#define REVIVE(m) PRAGMA(push_macro(#m))PRAGMA(pop_macro(#m))
#define DEC(n,...) (__VA_ARGS__)
#define FX(f,x) REVIVE(FX) f x
#define HOW_MANY_ARGS(...) REVIVE(HOW_MANY_ARGS) \
__VA_OPT__(+1 FX(HOW_MANY_ARGS, DEC(__VA_ARGS__)))
int main () {
printf("%i", HOW_MANY_ARGS(1,2,3,4,5)); // 5
}
It sounds like the one in the article works for more compilers, but there doesn't seem to be a copy-pasteable example anywhere to check for myself. Also, the "Our GitHub Org" link on the site just links to github.com.
viega 2 days ago [-]
Author of the article here.
Absolutely: the code box under the ascii art is a complete implementation. Just paste that into a C file and then use `H4X0R_VA_COUNT(...)`.
Or, you could follow the link to my typed variadic arguments article (from which this post forked off). The repo there is: https://codeberg.org/h4x0r/vargs
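For anyone who wants something copy-pasteable without visiting the repo, here is a minimal sketch of an argument-counting macro in the same spirit; the name VA_COUNT and the cap of 8 arguments are illustrative and not necessarily the article's H4X0R_VA_COUNT, and the zero-argument case relies on C23/C++20 __VA_OPT__:

    /* Count up to 8 variadic arguments; a sketch, not the article's exact macro. */
    #include <stdio.h>

    #define VA_COUNT(...) \
        VA_COUNT_IMPL(__VA_ARGS__ __VA_OPT__(,) 8, 7, 6, 5, 4, 3, 2, 1, 0)
    #define VA_COUNT_IMPL(_1, _2, _3, _4, _5, _6, _7, _8, n, ...) n

    int main(void) {
        printf("%d %d %d\n", VA_COUNT(), VA_COUNT(a), VA_COUNT(a, b, c)); /* 0 1 3 */
        return 0;
    }

The trailing count list simply gets shifted right by however many arguments you pass, so whichever number lands in the n slot is the count.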
viega 2 days ago [-]
And yes, GCC extensions are often going to be adopted in clang, but generally not the broader world of C and C++ compilers. Everything in my article conforms to the standard.
fuhsnn 2 days ago [-]
I played with a lot of preprocessor implementations and did my own (a redesign of chibicc's expansion algorithm); not many of them even get paint-blue behavior exactly right (the standard text is vague; to me it was more "matching GCC" than "conforming to the standard").
viega 2 days ago [-]
That's interesting. I agree with you that the standards text is pretty vague. I think that's why other attempts to show how to do this kind of thing don't get deep enough on the semantics, and why I adopted a "try it and see" strategy.
I do try to avoid this kind of thing unless necessary, so I don't have experience as to where the different compilers will fall down on different corner cases. I'd find it very interesting though, so please do share if you kept any record or have any memory!
fuhsnn 14 hours ago [-]
Examples from this article[1] frequently yield inconsistent results across implementations, particularly the ones that put parentheses in macros. It is indeed a very corner-case-y thing to do, though.
[1] https://www.scs.stanford.edu/~dm/blog/va-opt.html#c-macro-ov...
It seems that MSVC doesn't like those macros, though.
david2ndaccount 2 days ago [-]
Works with MSVC if you add /Zc:preprocessor (to get a standard compliant preprocessor instead of the legacy one).
Sharlin 1 days ago [-]
You know you're in for a wild ride when the `do { ... } while(0)` hack isn't even on the iceberg.
viega 1 days ago [-]
It is -- if you click into their full list, you should see it near the top in their "above the water" section, under `#pragma once`. I suspect it was added after the meme image was produced.
But, if it weren't on the iceberg page, it'd make sense. The semantics of `do { ... } while(0)` are in the standard, and the preprocessor has nothing to do with those semantics.
You are, of course, right that the construct is used all over the place in macros for good reason, though these days every compiler I care about supports the same (not in the standard, I believe) syntax for statement expressions, which works in even more contexts.
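For anyone who hasn't seen the two idioms side by side, here is a minimal sketch; the names are invented, and the statement-expression form is a GCC/Clang extension, not ISO C:

    #include <stdio.h>

    /* Standard C idiom: a multi-statement macro that behaves as one statement,
       even in an unbraced if/else. */
    #define SWAP_INT(a, b) do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

    /* GCC/Clang statement expression: the block yields a value, so it also
       works where an expression is required. */
    #define MAX_INT(a, b) ({ int a_ = (a), b_ = (b); a_ > b_ ? a_ : b_; })

    int main(void) {
        int x = 1, y = 2;
        if (x < y)
            SWAP_INT(x, y);                        /* safe thanks to do/while(0) */
        printf("%d %d %d\n", x, y, MAX_INT(x, y)); /* prints 2 1 2 */
        return 0;
    }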
Sharlin 1 days ago [-]
Ah, I meant that it was in the uppermost, least obscure section, "above the iceberg", not that it wasn't in the picture at all :)
One can also (ab)use the build system to run arbitrary preprocessing steps with any language over the "C" input. You can have recursive macros by using M4 or Perl or Python or some other language to expand them, converting your "foo.c.in" into a "foo.c" to hand off to the C preprocessor & compiler. It still feels dirty, but it's often much easier to understand & debug.
viega 1 days ago [-]
Yes, 100%. And since CPP doesn't actually understand C, it's not too hard to do some lightweight preprocessing that requires some real additional parsing.
But while CPP is pretty finicky and not very modern, getting such things working seamlessly with C build systems can be vastly worse (though better than in the days when the GNU tools were ubiquitous).
I tend to find meson easy to use compared to all the others, and do this kind of thing, but it's still difficult and brittle.
fuhsnn 2 days ago [-]
I wonder if the author is aware of the __VA_TAIL__ proposal[1]; it covers similar ground and is IMO very well thought out, but unfortunately it was not accepted into C2Y (judging from committee meeting minutes).
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3307.htm
viega [-]
Yes, I know that it was not accepted, but I do not have any color on why not. It's well thought out, but I do not think the semantics are self-evident to the average C programmer, who already finds the preprocessor inscrutable.
0x69420 1 days ago [-]
genuinely remarkable, the altogether perhaps even productive mischief you can get up to, especially with `__VA_OPT__` becoming a proper standard in both C and C++ so you don't have to feel dirty about using it.
i recently made use of plenty of ugly tricks in this vein to take a single authoritative table of macro invocations that defined a bunch of pixel formats, and make them graduate from defining bitfield structs to classes with accessors that performed good old fashioned shifts and masks, all without ever specifying the individual bit offsets of channels, just their individual widths, and macro magic did the rest. no templates, no actual c++, could just as feasibly produce pure c bindings down the line by just changing a few names.
getting really into this stuff makes you stop thinking of c function-like macros as functions of their arguments as such, but rather unary functions of argument lists, where arity roughly becomes the one notion vaguely akin to typing in the whole enterprise, or at least the one place where the compiler exhibits behaviour resembling that of a type checker. this was especially true considering the entries in the table i wound up with were variadic, terminating in variably many (name, width) parenthesised tuples. and i just... had the means to "uncons" them so to speak. fun stuff.
this is worth it, imo, in precisely one context, which is: you want a single source of truth that defines fiddly but formulaic implementations spread across multiple files that must remain coordinated, and this is something you do infrequently enough that you don't consider it worthwhile introducing "real" "big boy" code gen into your build process. mind, you usually do end up having to commit to a little utility header that defines convenient macros (_Ex and such in the article), but hey. c'est la vie. basically x macros (https://en.wikipedia.org/wiki/X_macro) on heart attack quantities of steroids.
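For readers unfamiliar with the pattern, here is a hedged, much-simplified sketch of the X-macro idea being described; PIXEL_CHANNELS and the other names are invented for illustration and are not the commenter's actual code:

    #include <stdio.h>

    /* One authoritative table of (name, width) channel entries. */
    #define PIXEL_CHANNELS(X) \
        X(red,   5)           \
        X(green, 6)           \
        X(blue,  5)

    /* Expansion 1: a bitfield struct generated from the table. */
    struct rgb565 {
    #define AS_BITFIELD(name, width) unsigned name : width;
        PIXEL_CHANNELS(AS_BITFIELD)
    #undef AS_BITFIELD
    };

    /* Expansion 2: the total bit width, computed from the same table. */
    enum {
    #define AS_WIDTH(name, width) + width
        RGB565_BITS = 0 PIXEL_CHANNELS(AS_WIDTH)
    #undef AS_WIDTH
    };

    int main(void) {
        struct rgb565 px = { .red = 31, .green = 63, .blue = 31 };
        printf("%d %d %d (%d bits total)\n", px.red, px.green, px.blue, RGB565_BITS);
        return 0;
    }

The bit offsets never appear anywhere; only the widths do, which is the point being made above.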
procaryote 2 days ago [-]
In many ways being limited ends up being a feature. Even limited as it is, you get some crimes against humanity like the Bourne shell source, but at least most people agree it is a bad idea.
If it allowed more unlimited metaprogramming, building big complex things as macros might well have become popular
lpribis 2 days ago [-]
The CPP does allow quite advanced metaprogramming, it's just so obtuse to use and requires insane hacks so almost nobody does. See one of my favorite projects https://github.com/hirrolot/metalang99
MangoToupe 1 days ago [-]
The CPP allows “advanced metaprogramming” like throating a banana can sometimes sound like speech
pjsg 2 days ago [-]
I wept when the author mentioned implementing SHA256 in macros.
camel-cdr 2 days ago [-]
Here is something similar: https://godbolt.org/z/Yj61b6GGj
I used a non-cryptographic PRNG to write a C program that only compiles if you know the correct key.
viega 2 days ago [-]
LOL, I suffered so you didn't have to.
jhallenworld 1 days ago [-]
The lack of (easy) recursion in CPP is so frustrating because it was always available in assembly languages, even with very old and very simple macro assemblers, with the caveat that the recursion depth was often very limited and there was no tail call elimination. For example, if you need to fill memory:
; Fill memory with backward sequence
macro fill n
word n
if n != 0
fill n - 1
endif
endm
So "fill 3" expands to:
word 3
word 2
word 1
word 0
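For contrast, the closest thing in portable CPP is bounded, hand-unrolled recursion, one macro per depth level; a minimal sketch with invented FILL_n names:

    #include <stdio.h>

    /* One macro per level of "recursion"; the maximum depth is fixed at write time. */
    #define FILL_0        0
    #define FILL_1     1, FILL_0
    #define FILL_2     2, FILL_1
    #define FILL_3     3, FILL_2

    static const int filled[] = { FILL_3 };   /* expands to { 3, 2, 1, 0 } */

    int main(void) {
        for (size_t i = 0; i < sizeof filled / sizeof filled[0]; i++)
            printf("word %d\n", filled[i]);
        return 0;
    }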
There is no way this was not known about when C was created. They must have been burned by recursive macro abuse and banned it (perhaps from m4 experience as others have said).
The other assembly language feature that I missed is the ability to switch sections. This is useful for building tables in a distributed fashion. Luckily you can do it with gcc.
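A hedged sketch of that section trick in C: GCC-specific, and it relies on the __start_/__stop_ bounds that GNU ld synthesizes for sections whose names are valid C identifiers (REGISTER_CMD and cmdtable are invented names):

    #include <stdio.h>

    struct cmd { const char *name; };

    /* Each use drops one entry into the custom "cmdtable" section; the uses
       could be scattered across many translation units. */
    #define REGISTER_CMD(n) \
        static const struct cmd cmd_##n \
        __attribute__((used, section("cmdtable"))) = { #n }

    REGISTER_CMD(hello);
    REGISTER_CMD(quit);

    /* GNU ld provides these section bounds automatically. */
    extern const struct cmd __start_cmdtable[], __stop_cmdtable[];

    int main(void) {
        for (const struct cmd *c = __start_cmdtable; c < __stop_cmdtable; c++)
            printf("registered: %s\n", c->name);
        return 0;
    }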
bluGill 1 days ago [-]
I've read the article 4 times already today and I'm still crying. This looks like the solution to a problem I'm having (C++, but I'm doing things that templates and constexpr can't do), but trying to get it all to work is painful. Kudos to the author for making an attempt to explain it.
viega 1 days ago [-]
If you can give me specifics on how it's not clear, I'd very much want to improve it. Please DM me about it.
bluGill 1 days ago [-]
Not so much that you are not clear - this is an area that is weird enough that I don't think it is possible to be clear, but you gave me some useful hints. I got some of what I want working.
Edit: I started writing and then realized that I need this to work on an old gcc that doesn't support these tricks, so I have to give up. (I can't wait until this old embedded hardware dies.)
What I'm trying to do is make #define(A, B) static int A ## _ ## B = register(MKSTRING(A ## _ ## B)) take any number of arguments. I can get the register call working, but my attempts at creating the variable name fail.
viega 1 days ago [-]
Thanks. Also, per another person in the thread, here are the two transformations he or she was asking for, heavily annotated.
They are both only a couple of lines each, but they deal with things like the fact that you've got one more argument than you should have commas, and the use of the # and ## operators in these things.
https://c.godbolt.org/z/6zqx1dsn3
bluGill [-]
Thanks. I didn't have enough levels of indirection. Once I added enough levels of indirection it works. Well, it works on my newer systems; I have to build for an embedded system where I'm stuck on an old compiler that doesn't support these tricks.
viega 1 days ago [-]
To be clear, the example I provided for the other person explains the bit you're missing where the names aren't working... if you carefully follow the rules, the # and ## operators don't allow expansion on their arguments, so you have to use a layer of indirection to get them expanded first.
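A minimal sketch of that indirection, with invented names (CAT/STR are not the article's macros):

    #include <stdio.h>

    #define CAT_(a, b) a##b
    #define CAT(a, b)  CAT_(a, b)   /* extra layer: a and b get macro-expanded first */
    #define STR_(x)    #x
    #define STR(x)     STR_(x)      /* extra layer: x gets macro-expanded first */

    #define NAME  foo
    #define INDEX 3

    int main(void) {
        int CAT(NAME, INDEX) = 42;                   /* declares int foo3 */
        /* CAT_(NAME, INDEX) would have pasted the literal tokens into NAMEINDEX. */
        printf("%s = %d\n", STR(CAT(NAME, INDEX)), foo3);  /* prints foo3 = 42 */
        return 0;
    }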
Joker_vD 1 days ago [-]
Also, thanks, now I can finally use

    #define _H4X0R_CONVERT_ONE(arg) \
        ((union { unsigned long long u; void *v; }){ \
            .u = (unsigned long long)arg, \
        }).v

ergonomically.
Funnily enough, the difference between passing ... and locally-allocated void*[] is basically who has to spill the data to the stack: the caller or the called function.
viega 1 days ago [-]
Well, I've done it that way if I'm willing to limit myself to pointers or ints up to a pointer size, but that doesn't work with floats or doubles, for instance.
Ergonomically, I have tended to start using _Generic for static type checking where possible, and that pushes me more toward avoiding arrays in types for this kind of thing.
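For readers who haven't used it, a small sketch of the kind of _Generic-based type dispatch being described (TYPE_NAME is an invented name, not the article's API; requires C11):

    #include <stdio.h>

    /* Maps the static type of an expression to a string at compile time. */
    #define TYPE_NAME(x) _Generic((x), \
        int:     "int",                \
        double:  "double",             \
        char *:  "char *",             \
        default: "something else")

    int main(void) {
        int i = 42; double d = 3.14; char *s = "hi";
        printf("%s %s %s\n", TYPE_NAME(i), TYPE_NAME(d), TYPE_NAME(s));
        return 0;
    }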
imglorp 21 hours ago [-]
> C has many advantages that have led to its longevity (60 years as perhaps the most important language).
53 years by my count. Did something relevant happen in 1960? Maybe author is alluding to B?
tester756 1 days ago [-]
Macros are one of the ugliest features available in langs like C/CPP.
WalterBright 2 days ago [-]
Imagine trying to implement the C preprocessor. I had to write it from scratch 3 times before it worked 100%.
viega 2 days ago [-]
Wow, you are a braver person than I. Well done.
WalterBright 2 days ago [-]
Thank you. I am actually perversely proud of it. It's Boost licensed. Open sourced.
le-mark 1 days ago [-]
This was back in the 80s when you were working on C compilers? That's an interesting story (the 80s compiler scene and what you worked on). I've picked up bits and pieces over the years; have you written it up anywhere? It would benefit many, I think.
No need for anyone else to struggle to implement a preprocessor.
camel-cdr 2 days ago [-]
microsoft fixed their broken c preprocessor implementation just a few years ago
WalterBright 2 days ago [-]
They could have just asked me :-)
kragen 1 days ago [-]
They were deliberately leaving their C compiler broken because they considered C++ to be the replacement for C.
WalterBright 1 days ago [-]
C++ uses an identical preprocessor.
kragen 24 hours ago [-]
That has sometimes been true, but for example C++ didn't have variadic macros, maybe still doesn't, because they were added to C after Microsoft decided to stop following C development. https://stackoverflow.com/a/21515839 goes into details of other differences.
winocm 2 days ago [-]
Mildly related, sort of, one can prevent expansion of variadic macros as follows:
#define printf(...)
int (printf)(const char *, ...);
I keep on seeing many random code bases just resort to #undef instead...
pwdisswordfishy 2 days ago [-]
Doesn't this trigger warnings?
joriatsy 1 days ago [-]
Function-like macros literally require `name(`, i.e. the name followed directly by an open paren; otherwise no macro substitution occurs.
So (name)() will always suppress function-like macros (but not non-function ones, i.e. a regular #define name xxx).
I used to write a preprocessor until I noticed those kinds of things... I stopped writing it after that.
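A small self-contained sketch of that rule; the names are invented, and the same trick is what lets the standard library ship both a macro and a real function under one name (think (getc)(stream)):

    #include <stdio.h>

    #define square(x) ((x) * (x))          /* function-like macro */

    /* "square" here is not followed immediately by '(', so the macro is not
       invoked and this defines a real function of the same name. */
    static int (square)(int x) { return x * x + 1000; }

    int main(void) {
        printf("%d\n", square(3));    /* macro expansion: ((3) * (3)) == 9 */
        printf("%d\n", (square)(3));  /* real function call: 1009          */
        return 0;
    }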
hyperhello 2 days ago [-]
Can I use this technique to expand MACRO(a,b,c,…) into something like F(a,b,c…); G(a,b,c…)?
viega 1 days ago [-]
Okay, finally found some time to provide you with a fully annotated example of your original ask here, assuming you wanted to transform the arguments passed to F into IDs, and the arguments passed to G into strings (as seemed to be the case from the rest of the thread).
https://c.godbolt.org/z/6zqx1dsn3
I've fully annotated it, so it might seem like more than it is. About half the macro code is from the original article (the chunk at the top). And I do implement both transforms for you.
Each one I think is only 6 lines of code by itself, despite the ridiculous amount of exposition in the comments.
If you have any questions about it, let me know.

    #define MACRO(...) F(__VA_ARGS__); G(__VA_ARGS__)

The technique in the article is more often used to type check the individual parameters, or wrap a function call around them individually, etc.
hyperhello 2 days ago [-]
Ok. How about into
F(a,”a”);F(b,”b”);etc.
The problem being automating enums and their names in one call. Like MACRO(a,b,c) and getting a map from a to “a”.
viega 2 days ago [-]
100%, that's definitely easy to do once you understand the technique.
hyperhello 2 days ago [-]
Please?
viega 2 days ago [-]
I'm on my phone, but if you start with the top 8 lines in the code box under the ascii art, you'll get an implementation of `H4X0R_MAP()`; the bottom two lines are an example, and you can just write yourself a body that produces one term. Only thing you need to know beyond that is the stringify operator.
viega 2 days ago [-]
And I should say, if you want to apply the same transformation to arguments twice, but call F() separately from G() per your starting example, you'd just apply your map twice in your top-level macro.
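For readers who want to see hyperhello's ask end to end without the article's machinery, here is a hedged sketch built on the __VA_OPT__ recursion trick linked elsewhere in this thread (the article's H4X0R_MAP is presumably similar in spirit but not identical); it assumes a C23 or C++20 preprocessor, and all names are invented:

    #include <stdio.h>

    /* Generic FOR_EACH in the __VA_OPT__ deferred-expansion style. */
    #define PARENS ()
    #define EXPAND(...)  EXPAND3(EXPAND3(EXPAND3(EXPAND3(__VA_ARGS__))))
    #define EXPAND3(...) EXPAND2(EXPAND2(EXPAND2(EXPAND2(__VA_ARGS__))))
    #define EXPAND2(...) EXPAND1(EXPAND1(EXPAND1(EXPAND1(__VA_ARGS__))))
    #define EXPAND1(...) __VA_ARGS__
    #define FOR_EACH(m, ...) __VA_OPT__(EXPAND(FOR_EACH_(m, __VA_ARGS__)))
    #define FOR_EACH_(m, a, ...) m(a) __VA_OPT__(FOR_EACH_AGAIN PARENS (m, __VA_ARGS__))
    #define FOR_EACH_AGAIN() FOR_EACH_

    /* The ask: MACRO(a, b, c) -> F(a, "a"); F(b, "b"); F(c, "c"); */
    #define F(id, name)  printf("%d -> %s\n", (int)(id), name);
    #define NAME_ONE(a)  F(a, #a)
    #define MACRO(...)   FOR_EACH(NAME_ONE, __VA_ARGS__)

    enum { north = 1, south, east };

    int main(void) {
        MACRO(north, south, east);   /* prints 1 -> north, 2 -> south, 3 -> east */
        return 0;
    }

The PARENS indirection defers each "recursive" call by one rescan, and the EXPAND tower supplies enough extra rescans to unroll it.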
WalterBright 2 days ago [-]
> if you want to apply the same transformation to arguments twice, but call F() separately from G() per your starting example, you'd just apply your map twice in your top-level macro
My brain just blew a fuse.
russfink 2 days ago [-]
Is this a DoS risk - code that sends your build chain into an infinite loop?
sltkr 2 days ago [-]
From a DoS-risk perspective there is no practical difference between an infinite loop and a finite but arbitrarily large loop, which was always possible.
For example, this doesn't work:
#define DOUBLE(x) DOUBLE(x) DOUBLE(x)
DOUBLE(x)
That would only expand once and then stop because of the rule against repeated expansion. But nothing prevents you from unrolling the first few recursive expansions, e.g.:
#define DOUBLE1(x) x x
#define DOUBLE2(x) DOUBLE1(x) DOUBLE1(x)
#define DOUBLE3(x) DOUBLE2(x) DOUBLE2(x)
#define DOUBLE4(x) DOUBLE3(x) DOUBLE3(x)
DOUBLE4(x)
This will generate 2^4 = 16 copies of x. Add 60 more lines to generate 2^64 copies of x. While 2^64 is technically a finite number, for all practical purposes it might as well be infinite.
kragen 1 days ago [-]
To do this as efficiently as possible, it's probably worthwhile to use a higher radix and shorter macro names. For example:
$ cc -E - <<.
> #define A(x) x x x x x x
> #define B(x) A(x) A(x) A(x) A(x)
> #define C(x) B(x) B(x) B(x) B(x)
> C(Noooooo)
> .
saghm 2 days ago [-]
Without any specific implementation of a constraint it certainly can happen, although I'm not totally sure that it's something to be concerned about in terms of a DoS so much as a nuisance when writing code with a bug in it; if you're including malicious code, there are probably much worse things it could do if it actually builds properly instead of just spinning indefinitely.
Rust's macros are recursive intentionally, and the compiler implements a recursion limit that IIRC defaults to 64, at which point it will error out and mention that you need to increase it with an attribute in the code if you need it to be higher. This isn't just for macros though, as I've seen it get triggered before with the compiler attempting to resolve deeply nested generics, so it seems plausible to me that C compilers might already have some sort of internal check for this. At the very least, C++ templates certainly can get pretty deeply nested, and given that the major C compilers are pretty closely related to their C++ counterparts, maybe this is something that exists in the shared part of the compiler logic.
viega 2 days ago [-]
C++ also has constexpr functions, which can be recursive.
All code can have bugs, error out and die.
There are lots of good reasons to run code at compile time, most commonly to generate code, especially tedious and error-prone code. If the language doesn't have good built-in facilities to do that, then people will write separate programs as part of the build, which adds system complexity, which is, in my experience, worse for C than for most other languages.
If a language can remove that build complexity, that's a real win, but only if the semantics are clear enough to the average programmer. For example, Nim's macro system originally was highly appealing (and easy) to me as a compiler guy, until I saw how other people find even simple examples completely opaque, worse than C macros.
WalterBright 2 days ago [-]
D doesn't have macros, quite deliberately.
What it does have are two features:
1. compile time evaluation of functions - meaning you can write ordinary D code and execute it at compile time, including handling strings
2. a "mixin" statement that has a string as an argument, and the string is compiled as if it were D source code, and that code replaces the mixin statement, and is compiled as usual
Simple and easy.
viega 2 days ago [-]
No. Other modern languages have strong compile-time execution capabilities, including Zig, Rust and C++. And my understanding is that C is looking to move in that direction, though as with C++, macros will not go away.
MangoToupe 2 days ago [-]
The C preprocessor. C doesn't have macros. It's fucking miserable. Anyone who uses it is a masochist.
procaryote 2 days ago [-]
It's a feature... macros allow people to change the language at will, which is great when you're researching programming languages (like in Lisp) but less good when you want maintainable and consistent code.
C++ has more powerful metaprogramming and look how that turned out
bluGill 1 days ago [-]
At least you can implement things like vector in C++. I challenge anyone to write the same with C macros. (This is semi-serious: a usable, fully generic STL for C would prevent a large number of bugs. The implementation can be as bad as you want, so long as the user API is not too difficult. Though note that I'm not a C programmer, so if there are better ways someone else is doing this, I'm not aware of it.)
uecker 1 days ago [-]
I wonder how you like my vector: https://godbolt.org/z/97YGrbP9s
(note, experimental library, but I see no fundamental issue).
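For anyone curious what the macro-only approach tends to look like in miniature, here is a hedged sketch of the common name-pasting style; it is NOT uecker's library, and DEFINE_VEC and friends are invented names:

    #include <stdio.h>
    #include <stdlib.h>

    /* Instantiate a vector type and a push function for element type T.
       T must be a single identifier (typedef multi-word types first);
       error handling is omitted for brevity. */
    #define DEFINE_VEC(T)                                          \
        typedef struct { T *data; size_t len, cap; } vec_##T;      \
        static inline void vec_##T##_push(vec_##T *v, T x) {       \
            if (v->len == v->cap) {                                \
                v->cap = v->cap ? v->cap * 2 : 8;                  \
                v->data = realloc(v->data, v->cap * sizeof(T));    \
            }                                                      \
            v->data[v->len++] = x;                                 \
        }

    DEFINE_VEC(int)
    DEFINE_VEC(double)

    int main(void) {
        vec_int v = { 0 };
        for (int i = 0; i < 5; i++) vec_int_push(&v, i * i);
        for (size_t i = 0; i < v.len; i++) printf("%d ", v.data[i]);
        printf("\n");
        free(v.data);
        return 0;
    }

The pain points that push people toward a real library show up immediately: every instantiation repeats the generated code per translation unit, and multi-token element types need a typedef before they can be name-pasted.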
bluGill 1 days ago [-]
That is amazing... I don't write C so I didn't dig too deep, but kudos for getting anything to work. Now you just need great documentation so we can figure out how to use it and what it supports without digging into the macros themselves. (Tests would be good too, but maybe they are there and I didn't see them.)
uecker 15 hours ago [-]
Yes, unfortunately I do not have a lot of time... maybe I can get some funding, but for C work this is difficult.
MangoToupe 2 days ago [-]
No, it is not a feature. It prevents C from being parseable and precludes actual metaprogramming. Just use Lisp if you want to write maintainable code.
Nobody wants to end up like C++ of course, you ain't wrong about that