1. Changelog

- R2
  - Highlighted the need for a wording change to support this paper.
- R1
  - Amended the discussion regarding the C++17-compatible implementation strategy.
  - Minor fixes.
- R0
  - First submission.
2. Motivation and Scope
Consider this example:
using IC = std::integral_constant<int, 42>;
IC ic;
std::variant<float> v = ic;
All major implementations reject this code (Godbolt). However, it is not entirely clear whether the code shouldn't instead be well-formed, with the variant containing 42 converted to float.
The current wording for std::variant's converting constructor (and similarly for its converting assignment operator) has been established by [P0608R3], "A sane variant converting constructor". In the latest Standard draft, [variant.ctor/14] states:

template<class T> constexpr variant(T&& t) noexcept(see below);

Let Tj be a type that is determined as follows: build an imaginary function FUN(Ti) for each alternative type Ti for which Ti x[] = {std::forward<T>(t)}; is well-formed for some invented variable x. The overload FUN(Tj) selected by overload resolution for the expression FUN(std::forward<T>(t)) defines the alternative Tj which is the type of the contained value after construction.
Let's therefore try to build the FUN overload set for the example above. Checking whether the x variable is well-formed can be implemented using the following concept:
template <typename Ti, typename From>
concept FUN_constraint = requires(From&& from) {
    { std::type_identity_t<Ti[]>{ std::forward<From>(from) } };
};
There is only one alternative type (float), so the associated FUN overload looks like this:
template <typename T>
    requires FUN_constraint<float, T>
void FUN(float);
Now, given the construction above:
std::variant<float> v = ic;
we therefore need to check if this is well-formed:
// In variant::variant(T&& t) constructor; therefore:
//   T = IC&
//   t is lvalue reference to IC
FUN<T>(std::forward<T>(t)); // well-formed?

// or, equivalently:
FUN<IC&>(ic); // well-formed?
The answer before the adoption of [P2280R4] ("Using unknown pointers and references in constant expressions") was no: the call would not find the FUN(float) overload because its associated constraint (FUN_constraint) would not be satisfied. Since there is no viable FUN overload, it follows that std::variant<float> v = ic; is ill-formed.
The purpose of the check in std::variant's converting constructor is to exclude narrowing conversions (cf. [P0870R5]). To this end, the check done with the invented x variable uses a form of list-initialization specifically because list-initialization bans narrowing conversions.
The wording in [variant.ctor/14] makes x an array, so that its list-initialization performs aggregate initialization, and [dcl.init.aggr]/4 applies:
If that initializer is of the form assignment-expression or =
assignment-expression and a narrowing conversion ([dcl.init.list]) is required to convert the expression, the program is ill-formed.
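For illustration (this example is ours, not part of the quoted wording), list-initialization of an array accepts such a conversion only when the source is a constant expression:

void narrowing_demo() {
    int i = 42;
    // float bad[] = { i };  // error: narrowing conversion from int to float (i is not a constant expression)
    float good[]   = { 42 }; // OK: 42 is a constant expression whose value is exactly representable as float
    (void) good;
}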
Going back to the opening example: initializing the float object in the x array from a value of type IC requires a conversion sequence. First the value is converted to int through IC's conversion operator; then the int value so obtained is converted to float via a floating-integral conversion ([conv.fpint]/2). Such a conversion is a narrowing conversion ([dcl.init.list]/7.3), "except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type".
Is the source a constant expression in this case? Before [P2280R4], the answer was always no, because of the usage of a reference type in the concept's requires-expression's argument list (the type of from).
That is, even if:
- IC's conversion operator towards int returns a compile-time constant value (the value of its value static data member, which in turn comes from its template parameter), and never actually reads the value of t (from) through the reference in order to perform the conversion; and
- IC's conversion operator is also constexpr, so it can be used during constant evaluation; and
- 42 is convertible to float and back to int without any loss of information,
the pre-[P2280R4] rules in [expr.const] made it a non-constant expression to merely "mention" from. This triggers a narrowing conversion (as we no longer are in the "except where" case), making the initialization of the array of floats ill-formed, and therefore the FUN_constraint not satisfied.
With the adoption of [P2280R4] some of these limitations have been lifted. GCC 14, which implements [P2280R4], accepts the FUN<IC&>(ic) call (Godbolt). At the time of this writing, GCC 14 is also the only compiler implementing the new rules.
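For reference, a minimal example of the relaxation (adapted from [P2280R4]'s own motivation, not from this paper; it requires a compiler implementing [P2280R4], such as GCC 14):

#include <array>

void check(std::array<int, 3> const& param) {
    // param is a reference to an unknown object, but size() never reads the
    // referent: ill-formed before [P2280R4], OK afterwards.
    constexpr auto sz = param.size();
    static_assert(sz == 3);
}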
2.1. Why doesn’t the testcase work?
Given this premise, how come std::variant<float> v = ic; is still ill-formed in GCC 14? The answer lies in the implementation of std::variant. std::variant is C++17, and therefore predates concepts. All major Standard Library implementations use SFINAE to constrain the set of FUN functions and find the right alternative type to build. In their "SFINAE triggers", they employ a call to std::declval as the single element in the array of Tis, something along these lines:
// Scheme of SFINAE-based narrowing detection, cf. P0870
template <typename T> struct NarrowingDetector { T x[1]; };

template <typename From, typename To, typename = void>
constexpr inline bool is_convertible_without_narrowing_v = false;

template <typename From, typename To>
constexpr inline bool is_convertible_without_narrowing_v<From, To,
    std::void_t<decltype(NarrowingDetector<To>{ { std::declval<From>() } })>
    //                                           ^^^^^^^^^^^^^^^^^^^^
    > = true;

// Example usage for constraining FUN(float):
template <typename From,
          std::enable_if_t<is_convertible_without_narrowing_v<From, float>, bool> = true>
void FUN(float);
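As a usage sketch (our own; it assumes the FUN_constraint concept and the is_convertible_without_narrowing_v trait defined above, compiled in C++20 mode with a compiler implementing [P2280R4], such as GCC 14), the two detection strategies disagree exactly on the motivating example:

#include <type_traits>

using IC = std::integral_constant<int, 42>;

// Concept-based detection: with [P2280R4], converting the IC lvalue is a
// constant expression with value 42, so there is no narrowing.
static_assert(FUN_constraint<float, IC&>);

// std::declval-based SFINAE detection: the std::declval call is never a
// constant expression, so the int -> float conversion still counts as
// narrowing and the trait stays false.
static_assert(!is_convertible_without_narrowing_v<IC&, float>);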
Since std::declval itself is not a constexpr function, calling it and using the return value is not a constant expression, and that ends up triggering a narrowing conversion; FUN(float) becomes unviable (and, consequently, std::variant<float> v = ic; becomes ill-formed).
But the Standard never talks about using std::declval for doing this detection! As shown above, doing the same detection using concepts will allow the construction to succeed.
This line of reasoning has resulted in this bug report against libstdc++, and then, ultimately, in this paper, in order to clarify the behavior of std::variant's converting constructor.
2.2. Proposed change
In conclusion: there is a gap between the specification of std::variant's converting constructor and what implementations actually do.
We see two possibilities: to amend std::variant's specification so that
- either the introductory example (std::variant<float> v = ic) is supposed to work, and therefore current implementations must be fixed in order to support it; or
- to endorse the fact that the construction of x in Ti x[] = {std::forward<T>(t)}; (as described by [variant.ctor/14]) is supposed to happen outside of a constant evaluation context, and the current behavior by implementations is the expected one.
This paper proposes the first option.
The rationale is that the wording for narrowing conversions in the core language is specially crafted to take constant expressions into account; and the wording for std::variant's converting constructor wants to avoid narrowing conversions. In case one can determine that a conversion can happen without narrowing, then there is no reason for std::variant to reject it. In other words, the behavior of std::variant should not diverge from the core language specification of what does and what doesn't constitute a narrowing conversion.
The second option would also go against the possible future evolution of having constexpr function arguments (cf. [P1045R1]), and for std::variant to truly behave like builtin types w.r.t. narrowing conversions:
constexpr int i = 42;
float f{i};               // OK
std::variant<float> v{i}; // Ill-formed. No change proposed (or possible, at the moment)
3. Design decisions
3.1. If we allow the example to work, would some user code break?
The behavior made possible by [P2280R4] differs from the one currently implemented in the sense that the set of viable FUN overloads (for a given variant specialization and input type) is going to be the same, or bigger: certain conversions are no longer considered narrowing, and therefore the respective FUN overloads are included in the overload set used to determine which alternative type is the active one.
In other words: it will never be the case that a FUN overload that is viable according to the current implementations will no longer be; [P2280R4] strictly relaxes the constraints on FUN.
There is a risk associated with extending the set of viable FUN overloads: if more than one overload becomes viable, then overload resolution has to pick the best one. This could break some user code, or change its behavior.
3.1.1. Variant construction was ill-formed; becomes legal
This is the very first example of this paper:
using IC = std::integral_constant<int, 42>;
IC ic;
std::variant<float> v = ic; // now     : ill-formed
                            // proposed: well-formed
Of course one can concoct examples where code misbehaves in case the corresponding constructibility check becomes true, but there is no real-world scenario where this would actually be harmful. ("Clever" user code is already able to detect pretty much any interface change in the Standard Library, via concepts and/or SFINAE. That does not imply that the Standard Library isn't allowed to evolve; see also [SD-8].)
Note that some ill-formed code may stay ill-formed, although for a different reason:
std::variant<float, double> v = ic; // now     : ill-formed (no viable FUN overload)
                                    // proposed: ill-formed (ambiguous)
3.1.2. Variant construction was legal; becomes ill-formed
In this case, it means that there was one single best FUN overload; by relaxing the rules, we end up with multiple overloads that rank equal, so the call to FUN becomes ambiguous and construction becomes ill-formed.
For instance:
IC ic;
std::variant<long, float> v = ic; // now     : selects long
                                  // proposed: ill-formed (ambiguous)
We actually welcome this change, because it highlights a semantic bug in user code. According to the core language, there is no reason why long should be preferred here: the two conversions from int to long and to float are equally ranked, and neither is narrowing:
void f(long);
void f(float);
f(ic); // ERROR, ambiguous
3.1.3. Variant construction was legal, stays legal, but changes semantics
The most "dangerous" case for end-users would be the case where the alternative type selected by std::variant's converting constructor silently changes.
That is:
From f;
std::variant<A, B> v = f; // now  : selects A
                          // after: selects B. Is this possible?
We do not believe that this is a possibility.
Note: for the sake of the argument, we are not taking into consideration the case where A/B/From's own behavior is influenced by [P2280R4], for instance by having a converting constructor with a constraint that itself changed meaning due to [P2280R4].
We are going once more to refer to:
- FUN as the imaginary functions described by std::variant's converting constructor;
- its associated constraint FUN_constraint, where one checks if Ti x[] = {std::forward<T>(t)}; is well-formed.
Ti is each alternative type in the variant, T and t are the parameters of std::variant's converting constructor (i.e. From& and f in the example).
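For concreteness, here is a compilable sketch of this machinery (our own; A, B and From are hypothetical types standing in for the ones in the example above):

#include <type_traits>
#include <utility>

struct A {};
struct B {};
struct From { operator A() const; operator B() const; };

template <typename Ti, typename T>
concept FUN_constraint = requires(T&& t) {
    { std::type_identity_t<Ti[]>{ std::forward<T>(t) } };
};

// One imaginary FUN per alternative type Ti of std::variant<A, B>:
template <typename T> requires FUN_constraint<A, T> void FUN(A); // Ti = A
template <typename T> requires FUN_constraint<B, T> void FUN(B); // Ti = B

// The selected alternative is the parameter type of the FUN overload chosen
// by overload resolution for FUN<T>(std::forward<T>(t)), with T = From&.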
Our reasoning is as follows:
- For the above code to select B, then FUN(A) and FUN(B) are viable, meaning From converts to both A and B and the FUN_constraint is satisfied for both. Note that FUN(A) was always viable, as A was selected before. This means that either FUN(B) was not viable before [P2280R4] and/or it changed its ranking in overload resolution.
- [P2280R4] does not affect overload resolution. Therefore, FUN(B) was not viable before, and now it is, and ranks better than FUN(A).
- [P2280R4]'s impact on FUN_constraint is limited to allowing certain conversions that were previously classified as narrowing (and thus ill-formed, making the constraint unsatisfied). It does not affect any other kind of conversion.
- Therefore, FUN(B) was not viable because there was a narrowing conversion between From and B.
- Narrowing can only happen between certain scalar types (cf. [dcl.init.list/7]). In a user-defined conversion sequence, narrowing can only happen during a standard conversion sequence.
- Therefore, at least one between From and B must be of scalar type.
  - If they're both class types, then there's a user-defined conversion from From to B, which is never narrowing.
  - There cannot be a conversion sequence like From → scalar type → standard conversion to another scalar type → B, as that would require two user-defined conversions (first and last conversions).
- From must be of class type, otherwise [P2280R4] has no effect on narrowing.
  - If From is a scalar type, then the FUN_constraint check won't change meaning after [P2280R4]. If the conversion to B was narrowing before, it is still narrowing afterwards, as we cannot apply the "source is a constant expression" exception to the narrowing rule, because that would require reading the value of the From source through the reference (t), and that is still not allowed in a constant expression even after [P2280R4].
- It follows that B must be of scalar type (otherwise, again, there would be no narrowing happening). There's a user-defined conversion sequence between From and B.
- The user-defined conversion sequence (UCS) from From to B must include a standard conversion sequence (SCS) where narrowing was happening before [P2280R4].
  - Narrowing can only happen in a SCS of Conversion rank (cf. [over.ics.scs]), which is the lowest rank possible.
  - A UCS is composed of a SCS, a user-defined conversion, and a second SCS.
  - The first SCS must be of rank Exact Match, as no other rank applies to classes.
  - Therefore, the narrowing was happening in the second SCS, which is the one of Conversion rank.
- The conversion from From to A must be ranked worse than the conversion from From to B. We have reached the contradiction: this is impossible, because there is simply no way for the conversion from From to A to be worse. At most, it can be of equal rank.
  - To show this, given the UCS from From to B, then the conversion from From to A must also be a UCS, whose second SCS is also of Conversion rank. (The first SCS is again Exact Match, as no other applies to a class type, and user-defined conversions don't have a rank.)
  - Since the ranking is equal, then [over.ics.rank/4] applies, which says that "Two conversion sequences with the same rank are indistinguishable unless one of the following rules applies". All the rules listed exclude the possibility that a narrowing for the conversion to B was happening before [P2280R4], and also that no narrowing was happening for A:
    - 4.1: "A conversion that does not convert a pointer or a pointer to member to bool is better than one that does": this would imply A is bool, meaning that a narrowing conversion was already happening for FUN(A) (a pointer to bool conversion is always narrowing), which is impossible, as it satisfies FUN_constraint by hypothesis.
    - 4.2: "A conversion that promotes an enumeration whose underlying type is fixed to its underlying type is better than one that promotes to the promoted underlying type, if the two are different": in neither case is there narrowing.
    - 4.3: "A conversion in either direction between floating-point type FP1 and floating-point type FP2 is better than a conversion in the same direction between FP1 and arithmetic type T3 if the floating-point conversion rank ([conv.rank]) of FP1 is equal to the rank of FP2 [...]", which excludes the possibility that there was a narrowing conversion between From and B (there can't be if they're both floating-point types with equal rank).
    - 4.4 and subsequent only apply to pointer conversions, where there is no narrowing as per the language rules.
In all of the cases above we have reached a contradiction: the reasoning starts with FUN(A) having its constraint satisfied (no narrowing), and FUN(B) having its constraint unsatisfied (narrowing); and the conclusions contradict this starting point.
Therefore, this breaking change cannot happen.
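As a small supporting example (our own, with hypothetical types): when a source class type converts to two alternatives via equally ranked user-defined conversion sequences, plain overload resolution is ambiguous rather than silently preferring one of them, mirroring the case discussed in 3.1.2:

struct Alt {};
struct Source {
    operator long() const { return 0; }
    operator Alt() const { return {}; }
};

void g(long);
void g(Alt);

// g(Source{}); // error: ambiguous -- the two user-defined conversion
//              // sequences are indistinguishable; neither silently "wins"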
3.1.4. Conclusions
In conclusion, the change proposed by this paper may result in source code becoming ill-formed in cases where there was an ambiguity to begin with. In any other case there is no behavioral difference.
Once more, due to the lack of constexpr function parameters (cf. [P1045R1]), the following code won't change meaning:
constexpr int i = 42;
float f{i};               // OK, no narrowing
std::variant<float> v{i}; // Still ill-formed
3.2. What about compilers implementing [P2280R4] in C++17?
[P2280R4] has been proposed as a Defect Report, all the way back against C++11. When using std::variant in C++17 mode, an implementation that implements [P2280R4] will not be able to use concepts (like the FUN_constraint shown above) in order to express the constraints on its overloads of FUN; it must fall back to SFINAE or similar detections.
A viable C++17 implementation strategy has been suggested by Jiang An here (many thanks!): one could hide the fact that std::declval isn't usable in a constant expression behind one layer of indirection.
The detection previously shown could be amended like this (Godbolt):
#include <type_traits>
#include <utility>

template <typename T> struct NarrowingDetector { T x[1]; };

// Indirection layer to hide std::declval:
template <typename To, typename From>
auto is_convertible_without_narrowing_helper(From&& f)
    -> decltype(NarrowingDetector<To>{ { std::forward<From>(f) } });

// As before:
template <typename From, typename To, typename = void>
constexpr inline bool is_convertible_without_narrowing_v = false;

template <typename From, typename To>
constexpr inline bool is_convertible_without_narrowing_v<From, To,
    std::void_t<decltype(is_convertible_without_narrowing_helper<To>(std::declval<From>()))>> = true;

// Example usage for constraining FUN(float):
template <typename From,
          std::enable_if_t<is_convertible_without_narrowing_v<From, float>, bool> = true>
void FUN(float);

// Testcase:
int main()
{
    using IC = std::integral_constant<int, 42>;
    FUN<IC>(IC{});  // OK after P2280R4
    IC ic;
    FUN<IC&>(ic);   // OK after P2280R4
}
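Our reading of why this indirection works (an observation of ours, not a claim from the suggestion itself): inside the helper's decltype operand, the array element is initialized from the reference parameter f rather than from a call to the non-constexpr std::declval, and [P2280R4] permits such a reference to appear in a constant expression, just like from in the concept-based formulation.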
4. Impact on the Standard
This proposal clarifies the behavior of std::variant's converting constructor and converting assignment operator.
It proposes no other changes to the Standard Library or to the core language.
5. Technical Specifications
5.1. Proposed wording
To be provided. We welcome help from LEWG/LWG in order to find suitable wording.
We are not sure about how to exactly specify the behavior change we seek. From a certain point of view, one could argue that the current wording is already correct; it’s just that implementors have chosen an implementation strategy which is non-conforming in some corner cases.
At a minimum, a clarification seems to be in order, since all implementations have aligned with what we claim to be an incorrect interpretation of the Standard.
Some possible ideas for a change in the wording are:
- the specification of [variant.ctor/14] and [variant.assign/11] could be clarified to state something along the lines of "the initialization of the invented variable x happens in a core constant expression if possible". Of course, it can't be changed to say that it must happen in a constant expression, otherwise that would disqualify too much, e.g. non-constexpr conversion operators;
- the motivating example of this paper could be added explaining the expected behavior. For instance, the following could be added to [variant.ctor]:
[Example 1:
variant<string, bool> v1 = "meow";      // holds string
variant<float, long> v2 = 0;            // holds long
using IC = integral_constant<int, 42>;
variant<float, string> v3 = IC{};       // holds float
— end example]
6. Acknowledgements
Thanks to KDAB for supporting this work.
All remaining errors are ours and ours only.