Doc. no. N2371=07-0231
Date: 2007-08-05
Project: Programming Language C++
Reply to: Howard Hinnant <howard.hinnant@gmail.com>

C++ Standard Library Active Issues List (Revision R50)

Reference ISO/IEC IS 14882:1998(E)

Also see:

The purpose of this document is to record the status of issues which have come before the Library Working Group (LWG) of the ANSI (J16) and ISO (WG21) C++ Standards Committee. Issues represent potential defects in the ISO/IEC IS 14882:1998(E) document. Issues are not to be used to request new features.

This document contains only library issues which are actively being considered by the Library Working Group. That is, issues which have a status of New, Open, Ready, and Review. See Library Defect Reports List for issues considered defects and Library Closed Issues List for issues considered closed.

The issues in these lists are not necessarily formal ISO Defect Reports (DR's). While some issues will eventually be elevated to official Defect Report status, other issues will be disposed of in other ways. See Issue Status.

This document is in an experimental format designed for both viewing via a world-wide web browser and hard-copy printing. It is available as an HTML file for browsing or PDF file for printing.

Prior to Revision 14, library issues lists existed in two slightly different versions; a Committee Version and a Public Version. Beginning with Revision 14 the two versions were combined into a single version.

This document includes [bracketed italicized notes] as a reminder to the LWG of current progress on issues. Such notes are strictly unofficial and should be read with caution as they may be incomplete or incorrect. Be aware that LWG support for a particular resolution can quickly change if new viewpoints or killer examples are presented in subsequent discussions.

For the most current official version of this document see http://www.open-std.org/jtc1/sc22/wg21/. Requests for further information about this document should include the document number above, reference ISO/IEC 14882:1998(E), and be submitted to Information Technology Industry Council (ITI), 1250 Eye Street NW, Washington, DC 20005.

Public information as to how to obtain a copy of the C++ Standard, join the standards committee, submit an issue, or comment on an issue can be found in the comp.std.c++ FAQ. Public discussion of C++ Standard related issues occurs on news:comp.std.c++.

For committee members, files available on the committee's private web site include the HTML version of the Standard itself. HTML hyperlinks from this issues list to those files will only work for committee members who have downloaded them into the same disk directory as the issues list files.

Revision History

Issue Status

New - The issue has not yet been reviewed by the LWG. Any Proposed Resolution is purely a suggestion from the issue submitter, and should not be construed as the view of LWG.

Open - The LWG has discussed the issue but is not yet ready to move the issue forward. There are several possible reasons for open status:

A Proposed Resolution for an open issue is still not to be construed as the view of LWG. Comments on the current state of discussions are often given at the end of open issues in an italic font. Such comments are for information only and should not be given undue importance.

Dup - The LWG has reached consensus that the issue is a duplicate of another issue, and will not be further dealt with. A Rationale identifies the duplicated issue's issue number.

NAD - The LWG has reached consensus that the issue is not a defect in the Standard.

NAD Editorial - The LWG has reached consensus that the issue can either be handled editorially, or is handled by a paper (usually linked to in the rationale).

NAD Future - In addition to the regular status, the LWG believes that this issue should be revisited at the next revision of the standard.

Review - Exact wording of a Proposed Resolution is now available for review on an issue for which the LWG previously reached informal consensus.

Tentatively Ready - The issue has been reviewed online, but not in a meeting, and some support has been formed for the proposed resolution. Tentatively Ready issues may be moved to Ready and forwarded to full committee within the same meeting. Unlike Ready issues they will be reviewed in subcommittee prior to forwarding to full committee.

Ready - The LWG has reached consensus that the issue is a defect in the Standard, the Proposed Resolution is correct, and the issue is ready to forward to the full committee for further action as a Defect Report (DR).

DR - (Defect Report) - The full J16 committee has voted to forward the issue to the Project Editor to be processed as a Potential Defect Report. The Project Editor reviews the issue, and then forwards it to the WG21 Convenor, who returns it to the full committee for final disposition. This issues list accords the status of DR to all these Defect Reports regardless of where they are in that process.

TC - (Technical Corrigenda) - The full WG21 committee has voted to accept the Defect Report's Proposed Resolution as a Technical Corrigendum. Action on this issue is thus complete and no further action is possible under ISO rules.

TRDec - (Decimal TR defect) - The LWG has voted to accept the Defect Report's Proposed Resolution into the Decimal TR. Action on this issue is thus complete and no further action is expected.

WP - (Working Paper) - The proposed resolution has not been accepted as a Technical Corrigendum, but the full WG21 committee has voted to apply the Defect Report's Proposed Resolution to the working paper.

Pending - This is a status qualifier. When prepended to a status this indicates the issue has been processed by the committee, and a decision has been made to move the issue to the associated unqualified status. However for logistical reasons the indicated outcome of the issue has not yet appeared in the latest working paper.

Issues are always given the status of New when they first appear on the issues list. They may progress to Open or Review while the LWG is actively working on them. When the LWG has reached consensus on the disposition of an issue, the status will then change to Dup, NAD, or Ready as appropriate. Once the full J16 committee votes to forward Ready issues to the Project Editor, they are given the status of Defect Report (DR). These in turn may become the basis for Technical Corrigenda (TC), or are closed without action other than a Record of Response (RR). The intent of this LWG process is that only issues which are truly defects in the Standard move to the formal ISO DR status.

Active Issues


23. Num_get overflow result

Section: 22.2.2.1.2 [facet.num.get.virtuals] Status: Open Submitter: Nathan Myers Date: 1998-08-06

View other active issues in [facet.num.get.virtuals].

View all other issues in [facet.num.get.virtuals].

View all issues with Open status.

Discussion:

The current description of numeric input does not account for the possibility of overflow. This is an implicit result of changing the description to rely on the definition of scanf() (which fails to report overflow), and conflicts with the documented behavior of traditional and current implementations.

Users expect, when reading a character sequence that results in a value unrepresentable in the specified type, to have an error reported. The standard as written does not permit this.

Further comments from Dietmar:

I don't feel comfortable with the proposed resolution to issue 23: It kind of simplifies the issue too much. Here is what is going on:

Currently, the behavior of numeric overflow is rather counterintuitive and hard to trace, so I will describe it briefly:

Further discussion from Redmond:

The basic problem is that we've defined our behavior, including our error-reporting behavior, in terms of C90. However, C90's method of reporting overflow in scanf is not technically an "input error". The strto_* functions are more precise.
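As a rough illustration (not part of the issue), the snippet below shows how strtol reports overflow; the 20-digit input exceeds LONG_MAX on any platform where long is 64 bits or narrower:

#include <cassert>
#include <cerrno>
#include <climits>
#include <cstdlib>

int main ()
{
    const char* text = "99999999999999999999";   // larger than any 64-bit long

    errno = 0;
    char* end;
    const long v = std::strtol (text, &end, 10);
    (void)end;

    assert (errno == ERANGE);   // overflow is reported distinctly
    assert (v == LONG_MAX);     // and the nearest representable value is returned
}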

There was general consensus that failbit should be set upon overflow. We considered three options based on this:

  1. Set failbit upon conversion error (including overflow), and don't store any value.
  2. Set failbit upon conversion error, and also set errno to indicate the precise nature of the error.
  3. Set failbit upon conversion error. If the error was due to overflow, store +-numeric_limits<T>::max() as an overflow indication.

Straw poll: (1) 5; (2) 0; (3) 8.

Discussed at Lillehammer. General outline of what we want the solution to look like: we want to say that overflow is an error, and provide a way to distinguish overflow from other kinds of errors. Choose candidate field the same way scanf does, but don't describe the rest of the process in terms of format. If a finite input field is too large (positive or negative) to be represented as a finite value, then set failbit and assign the nearest representable value. Bill will provide wording.
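A minimal sketch of the intended behavior (assuming the Lillehammer direction is adopted; this is not current standard wording):

#include <cassert>
#include <limits>
#include <sstream>

int main ()
{
    std::istringstream in ("9999999999999999999999");
    int i = 0;
    in >> i;

    // intended outcome: overflow is an error reported via failbit, and the
    // nearest representable value is stored
    assert (in.fail ());
    assert (i == std::numeric_limits<int>::max ());
}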

Discussed at Toronto: N2327 is in alignment with the direction we wanted to go with in Lillehammer. Bill to work on.

Proposed resolution:


96. Vector<bool> is not a container

Section: 23.2.5 [vector] Status: Open Submitter: AFNOR Date: 1998-10-07

View all other issues in [vector].

View all issues with Open status.

Discussion:

vector<bool> is not a container as its reference and pointer types are not references and pointers.

Also it forces everyone to have a space optimization instead of a speed one.

See also: 99-0008 == N1185 Vector<bool> is Nonconforming, Forces Optimization Choice.
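For illustration only, a minimal example of the requirement that is violated: for a conforming container the address of an element can be taken through reference, but vector<bool>::reference is a proxy class returned by value:

#include <vector>

void illustrate ()
{
    std::vector<int>  vi (10);
    std::vector<bool> vb (10);

    int* pi = &vi[0];       // fine: vector<int>::reference is int&
    // bool* pb = &vb[0];   // ill-formed: vb[0] is a proxy object, not a bool&
    (void)pi;
}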

[In Santa Cruz the LWG felt that this was Not A Defect.]

[In Dublin many present felt that failure to meet Container requirements was a defect. There was disagreement as to whether or not the optimization requirements constituted a defect.]

[The LWG looked at the following resolutions in some detail:
     * Not A Defect.
     * Add a note explaining that vector<bool> does not meet Container requirements.
     * Remove vector<bool>.
     * Add a new category of container requirements which vector<bool> would meet.
     * Rename vector<bool>.

No alternative had strong, widespread support and every alternative had at least one "over my dead body" response.

There was also mention of a transition scheme something like (1) add vector_bool and deprecate vector<bool> in the next standard. (2) Remove vector<bool> in the following standard.]

[Modifying container requirements to permit returning proxies (thus allowing container requirements conforming vector<bool>) was also discussed.]

[It was also noted that there is a partial but ugly workaround in that vector<bool> may be further specialized with a custom allocator.]

[Kona: Herb Sutter presented his paper J16/99-0035==WG21/N1211, vector<bool>: More Problems, Better Solutions. Much discussion of a two step approach: a) deprecate, b) provide replacement under a new name. LWG straw vote on that: 1-favor, 11-could live with, 2-over my dead body. This resolution was mentioned in the LWG report to the full committee, where several additional committee members indicated over-my-dead-body positions.]

Discussed at Lillehammer. General agreement that we should deprecate vector<bool> and introduce this functionality under a different name, e.g. bit_vector. This might make it possible to remove the vector<bool> specialization in the standard that comes after C++0x. There was also a suggestion that in C++0x we could additionally say that it's implementation-defined whether vector<bool> refers to the specialization or to the primary template, but there wasn't general agreement that this was a good idea.

We need a paper for the new bit_vector class.

Proposed resolution:

We now have: N2050 and N2160.

[ Batavia: The LWG feels we need something closer to SGI's bitvector to ease migration from vector<bool>, although some of the functionality from N2050 could well be used in such a template. The concern is easing the API migration for those users who want to continue using a bit-packed container. Alan and Beman to work. ]


255. Why do basic_streambuf<>::pbump() and gbump() take an int?

Section: 27.5.2 [streambuf] Status: Open Submitter: Martin Sebor Date: 2000-08-12

View all other issues in [streambuf].

View all issues with Open status.

Discussion:

The basic_streambuf members gbump() and pbump() are specified to take an int argument. This requirement prevents the functions from effectively manipulating buffers larger than std::numeric_limits<int>::max() characters. It also makes the common use case for these functions somewhat difficult, as many compilers will issue a warning when an argument of a type larger than int (such as ptrdiff_t on LLP64 architectures) is passed to either of the functions. Since it's often the result of the subtraction of two pointers that is passed to the functions, a cast is necessary to silence such warnings. Finally, the use of a native type in the functions' signatures is inconsistent with other member functions (such as sgetn() and sputn()) that manipulate the underlying character buffer. Those functions take a streamsize argument.
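The usage pattern described above might look like the following sketch (membuf and skip_to are illustrative names only, not proposed components):

#include <cstddef>
#include <streambuf>

struct membuf : std::streambuf
{
    membuf (char* first, char* last) { setg (first, first, last); }

    // advance the get pointer to p; the cast is forced by the int parameter
    // of gbump() because pointer subtraction yields ptrdiff_t
    void skip_to (char* p)
    {
        std::ptrdiff_t n = p - gptr ();
        gbump (static_cast<int>(n));   // warning-prone without the cast on LLP64
    }
};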

Proposed resolution:

Change the signatures of these functions in the synopsis of template class basic_streambuf (27.5.2) and in their descriptions (27.5.2.3.1, p4 and 27.5.2.3.2, p4) to take a streamsize argument.

Although this change has the potential of changing the ABI of the library, the change will affect only platforms where int is different than the definition of streamsize. However, since both functions are typically inline (they are on all known implementations), even on such platforms the change will not affect any user code unless it explicitly relies on the existing type of the functions (e.g., by taking their address). Such a possibility is IMO quite remote.

Alternate Suggestion from Howard Hinnant, c++std-lib-7780:

This is something of a nit, but I'm wondering if streamoff wouldn't be a better choice than streamsize. The argument to pbump and gbump MUST be signed. But the standard has this to say about streamsize (27.4.1/2/Footnote):

[Footnote: streamsize is used in most places where ISO C would use size_t. Most of the uses of streamsize could use size_t, except for the strstreambuf constructors, which require negative values. It should probably be the signed type corresponding to size_t (which is what Posix.2 calls ssize_t). --- end footnote]

This seems a little weak for the argument to pbump and gbump. Should we ever really get rid of strstream, this footnote might go with it, along with the reason to make streamsize signed.

Rationale:

The LWG believes this change is too big for now. We may wish to reconsider this for a future revision of the standard. One possibility is overloading pbump, rather than changing the signature.

[ 2006-05-04: Reopened at the request of Chris (Krzysztof Żelechowski) ]


258. Missing allocator requirement

Section: 20.1.2 [allocator.requirements] Status: Open Submitter: Matt Austern Date: 2000-08-22

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Discussion:

From lib-7752:

I've been assuming (and probably everyone else has been assuming) that allocator instances have a particular property, and I don't think that property can be deduced from anything in Table 32.

I think we have to assume that allocator type conversion is a homomorphism. That is, if x1 and x2 are of type X, where X::value_type is T, and if type Y is X::template rebind<U>::other, then Y(x1) == Y(x2) if and only if x1 == x2.

Further discussion: Howard Hinnant writes, in lib-7757:

I think I can prove that this is not provable by Table 32. And I agree it needs to be true except for the "and only if". If x1 != x2, I see no reason why it can't be true that Y(x1) == Y(x2). Admittedly I can't think of a practical instance where this would happen, or be valuable. But I also don't see a need to add that extra restriction. I think we only need:

if (x1 == x2) then Y(x1) == Y(x2)

If we decide that == on allocators is transitive, then I think I can prove the above. But I don't think == is necessarily transitive on allocators. That is:

Given x1 == x2 and x2 == x3, this does not mean x1 == x3.

Example:

x1 can deallocate pointers from: x1, x2, x3
x2 can deallocate pointers from: x1, x2, x4
x3 can deallocate pointers from: x1, x3
x4 can deallocate pointers from: x2, x4

x1 == x2, and x2 == x4, but x1 != x4
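The property Howard argues for can be pictured as the following check (a sketch only; X stands for an arbitrary allocator type and U for an arbitrary rebound value type):

#include <cassert>

template <class X, class U>
void check_rebind_preserves_equality (const X& x1, const X& x2)
{
    typedef typename X::template rebind<U>::other Y;

    // if (x1 == x2) then Y(x1) == Y(x2)
    if (x1 == x2)
        assert (Y (x1) == Y (x2));
}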

[Toronto: LWG members offered multiple opinions. One opinion is that it should not be required that x1 == x2 implies Y(x1) == Y(x2), and that it should not even be required that X(x1) == x1. Another opinion is that the second line from the bottom in table 32 already implies the desired property. This issue should be considered in light of other issues related to allocator instances.]

Proposed resolution:

Add row to Table 35: Allocator requirements, right after the row defining operator!=:

If a1 == a2 then Y(a1) == Y(a2)

[Lillehammer: Same conclusion as before: this should be considered as part of an allocator redesign, not solved on its own.]

[ Batavia: An allocator redesign is not forthcoming and thus we fixed this one issue. ]

[ Toronto: Reopened at the request of the project editor (Pete) because the proposed wording did not fit within the indicated table. The intent of the resolution remains unchanged. Pablo to work with Pete on improved wording. ]


290. Requirements to for_each and its function object

Section: 25.1.1 [alg.foreach] Status: Open Submitter: Angelika Langer Date: 2001-01-03

View all other issues in [alg.foreach].

View all issues with Open status.

Discussion:

The specification of the for_each algorithm does not have a "Requires" section, which means that there are no restrictions imposed on the function object whatsoever. In essence it means that I can provide any function object with arbitrary side effects and I can still expect a predictable result. In particular I can expect that the function object is applied exactly last - first times, which is promised in the "Complexity" section.

I don't see how any implementation can give such a guarantee without imposing requirements on the function object.

Just as an example: consider a function object that removes elements from the input sequence. In that case, what does the complexity guarantee (applies f exactly last - first times) mean?

One can argue that this is obviously a nonsensical application and a theoretical case, which unfortunately it isn't. I have seen programmers shooting themselves in the foot this way, and they did not understand that there are restrictions even if the description of the algorithm does not say so.
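The kind of foot-shooting described above might look like this sketch (erase_negatives is an illustrative name):

#include <algorithm>
#include <list>

struct erase_negatives
{
    std::list<int>* seq;

    void operator() (int v)
    {
        if (v < 0)
            seq->remove (v);   // may erase the element for_each is visiting,
                               // invalidating the iterator it holds
    }
};

void dangerous (std::list<int>& l)
{
    erase_negatives f = { &l };
    std::for_each (l.begin (), l.end (), f);   // undefined behavior if anything is erased
}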

[Lillehammer: This is more general than for_each. We don't want the function object in transform invalidating iterators either. There should be a note somewhere in clause 17 (17, not 25) saying that user code operating on a range may not invalidate iterators unless otherwise specified. Bill will provide wording.]

Proposed resolution:


299. Incorrect return types for iterator dereference

Section: 24.1.4 [bidirectional.iterators], 24.1.5 [random.access.iterators] Status: Open Submitter: John Potter Date: 2001-01-22

View all other issues in [bidirectional.iterators].

View all issues with Open status.

Discussion:

In section 24.1.4 [bidirectional.iterators], Table 75 gives the return type of *r-- as convertible to T. This is not consistent with Table 74 which gives the return type of *r++ as T&. *r++ = t is valid while *r-- = t is invalid.

In section 24.1.5 [random.access.iterators], Table 76 gives the return type of a[n] as convertible to T. This is not consistent with the semantics of *(a + n) which returns T& by Table 74. *(a + n) = t is valid while a[n] = t is invalid.

Discussion from the Copenhagen meeting: the first part is uncontroversial. The second part, operator[] for Random Access Iterators, requires more thought. There are reasonable arguments on both sides. Return by value from operator[] enables some potentially useful iterators, e.g. a random access "iota iterator" (a.k.a "counting iterator" or "int iterator"). There isn't any obvious way to do this with return-by-reference, since the reference would be to a temporary. On the other hand, reverse_iterator takes an arbitrary Random Access Iterator as template argument, and its operator[] returns by reference. If we decided that the return type in Table 76 was correct, we would have to change reverse_iterator. This change would probably affect user code.

History: the contradiction between reverse_iterator and the Random Access Iterator requirements has been present from an early stage. In both the STL proposal adopted by the committee (N0527==94-0140) and the STL technical report (HPL-95-11 (R.1), by Stepanov and Lee), the Random Access Iterator requirements say that operator[]'s return value is "convertible to T". In N0527 reverse_iterator's operator[] returns by value, but in HPL-95-11 (R.1), and in the STL implementation that HP released to the public, reverse_iterator's operator[] returns by reference. In 1995, the standard was amended to reflect the contents of HPL-95-11 (R.1). The original intent for operator[] is unclear.

In the long term it may be desirable to add more fine-grained iterator requirements, so that access method and traversal strategy can be decoupled. (See "Improved Iterator Categories and Requirements", N1297 = 01-0011, by Jeremy Siek.) Any decisions about issue 299 should keep this possibility in mind.

Further discussion: I propose a compromise between John Potter's resolution, which requires T& as the return type of a[n], and the current wording, which requires convertible to T. The compromise is to keep the convertible to T for the return type of the expression a[n], but to also add a[n] = t as a valid expression. This compromise "saves" the common case uses of random access iterators, while at the same time allowing iterators such as counting iterator and caching file iterators to remain random access iterators (iterators where the lifetime of the object returned by operator*() is tied to the lifetime of the iterator).

Note that the compromise resolution necessitates a change to reverse_iterator. It would need to use a proxy to support a[n] = t.

Note also there is one kind of mutable random access iterator that will no longer meet the new requirements. Currently, iterators that return an r-value from operator[] meet the requirements for a mutable random access iterator, even though the expression a[n] = t will only modify a temporary that goes away. With this proposed resolution, a[n] = t will be required to have the same operational semantics as *(a + n) = t.
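For illustration only, a fragment of the kind of iterator the compromise is meant to preserve: a counting iterator has no underlying element, so operator*() and operator[] can only return by value:

#include <cstddef>

struct counting_iterator
{
    int value;

    int operator* () const                  { return value; }
    int operator[] (std::ptrdiff_t n) const { return value + static_cast<int>(n); }

    // increment, decrement, +, -, and comparisons omitted for brevity
};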

Proposed resolution:

In section 24.1.4 [lib.bidirectional.iterators], change the return type in table 75 from "convertible to T" to T&.

In section 24.1.5 [lib.random.access.iterators], change the operational semantics for a[n] to " the r-value of a[n] is equivalent to the r-value of *(a + n)". Add a new row in the table for the expression a[n] = t with a return type of convertible to T and operational semantics of *(a + n) = t.

[Lillehammer: Real problem, but should be addressed as part of iterator redesign]


309. Does sentry catch exceptions?

Section: 27.6 [iostream.format] Status: Open Submitter: Martin Sebor Date: 2001-03-19

View other active issues in [iostream.format].

View all other issues in [iostream.format].

View all issues with Open status.

Discussion:

The descriptions of the constructors of basic_istream<>::sentry (27.6.1.1.3 [istream::sentry]) and basic_ostream<>::sentry (27.6.2.4 [ostream::sentry]) do not explain what the functions do in case an exception is thrown while they execute. Some current implementations allow all exceptions to propagate, others catch them and set ios_base::badbit instead, still others catch some but let others propagate.

The text also mentions that the functions may call setstate(failbit) (without actually saying on what object, but presumably the stream argument is meant). That may have been fine for basic_istream<>::sentry prior to issue 195, since the function performs an input operation which may fail. However, issue 195 amends 27.6.1.1.3 [istream::sentry], p2 to clarify that the function should actually call setstate(failbit | eofbit), so the sentence in p3 is redundant or even somewhat contradictory.

The same sentence that appears in 27.6.2.4 [ostream::sentry], p3 doesn't seem to be very meaningful for basic_ostream<>::sentry, which performs no input. It is actually rather misleading since it would appear to guide library implementers to calling setstate(failbit) when os.tie()->flush(), the only called function, throws an exception (typically, it's badbit that's set in response to such an event).

Additional comments from Martin, who isn't comfortable with the current proposed resolution (see c++std-lib-11530)

The istream::sentry ctor says nothing about how the function deals with exceptions (27.6.1.1.2, p1 says that the class is responsible for doing "exception safe"(*) prefix and suffix operations but it doesn't explain what level of exception safety the class promises to provide). The mockup example of a "typical implementation of the sentry ctor" given in 27.6.1.1.2, p6, removed in ISO/IEC 14882:2003, doesn't show exception handling, either. Since the ctor is not classified as a formatted or unformatted input function, the text in 27.6.1.1, p1 through p4 does not apply. All this would seem to suggest that the sentry ctor should not catch or in any way handle exceptions thrown from any functions it may call. Thus, the typical implementation of an istream extractor may look something like [1].

The problem with [1] is that while it correctly sets ios::badbit if an exception is thrown from one of the functions called from the sentry ctor, if the sentry ctor reaches EOF while extracting whitespace from a stream that has eofbit or failbit set in exceptions(), it will cause an ios::failure to be thrown, which will in turn cause the extractor to set ios::badbit.

The only straightforward way to prevent this behavior is to move the definition of the sentry object in the extractor above the try block (as suggested by the example in 22.2.8, p9 and also indirectly supported by 27.6.1.3, p1). See [2]. But such an implementation will allow exceptions thrown from functions called from the ctor to freely propagate to the caller regardless of the setting of ios::badbit in the stream object's exceptions().

So since neither [1] nor [2] behaves as expected, the only possible solution is to have the sentry ctor catch exceptions thrown from called functions, set badbit, and propagate those exceptions if badbit is also set in exceptions(). (Another solution exists that deals with both kinds of sentries, but the code is non-obvious and cumbersome -- see [3].)

Please note that, as the issue points out, current libraries do not behave consistently, suggesting that implementors are not quite clear on the exception handling in istream::sentry, despite the fact that some LWG members might feel otherwise. (As documented by the parenthetical comment here: http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1480.html#309)

Also please note that those LWG members who in Copenhagen felt that "a sentry's constructor should not catch exceptions, because sentries should only be used within (un)formatted input functions and that exception handling is the responsibility of those functions, not of the sentries," as noted here http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/papers/2001/n1310.html#309 would in effect be either arguing for the behavior described in [1] or for extractors implemented along the lines of [3].

The original proposed resolution (Revision 25 of the issues list) clarifies the role of the sentry ctor WRT exception handling by making it clear that extractors (both library or user-defined) should be implemented along the lines of [2] (as opposed to [1]) and that no exception thrown from the callees should propagate out of either function unless badbit is also set in exceptions().

[1] Extractor that catches exceptions thrown from sentry:

struct S { long i; };

istream& operator>> (istream &strm, S &s)
{
    ios::iostate err = ios::goodbit;
    try {
        const istream::sentry guard (strm, false);
        if (guard) {
            use_facet<num_get<char> >(strm.getloc ())
                .get (istreambuf_iterator<char>(strm),
                      istreambuf_iterator<char>(),
                      strm, err, s.i);
        }
    }
    catch (...) {
        bool rethrow;
        try {
            strm.setstate (ios::badbit);
            rethrow = false;
        }
        catch (...) {
            rethrow = true;
        }
        if (rethrow)
            throw;
    }
    if (err)
        strm.setstate (err);
    return strm;
}

[2] Extractor that propagates exceptions thrown from sentry:

istream& operator>> (istream &strm, S &s)
{
    istream::sentry guard (strm, false);
    if (guard) {
        ios::iostate err = ios::goodbit;
        try {
            use_facet<num_get<char> >(strm.getloc ())
                .get (istreambuf_iterator<char>(strm),
                      istreambuf_iterator<char>(),
                      strm, err, s.i);
        }
        catch (...) {
            bool rethrow;
            try {
                strm.setstate (ios::badbit);
                rethrow = false;
            }
            catch (...) {
                rethrow = true;
            }
            if (rethrow)
                throw;
        }
        if (err)
            strm.setstate (err);
    }
    return strm;
}

[3] Extractor that catches exceptions thrown from sentry but doesn't set badbit if the exception was thrown as a result of a call to strm.clear().

istream& operator>> (istream &strm, S &s)
{
    const ios::iostate state = strm.rdstate ();
    const ios::iostate except = strm.exceptions ();
    ios::iostate err = std::ios::goodbit;
    bool thrown = true;
    try {
        const istream::sentry guard (strm, false);
        thrown = false;
        if (guard) {
            use_facet<num_get<char> >(strm.getloc ())
                .get (istreambuf_iterator<char>(strm),
                      istreambuf_iterator<char>(),
                      strm, err, s.i);
        }
    }
    catch (...) {
        if (thrown && state & except)
            throw;
        try {
            strm.setstate (ios::badbit);
            thrown = false;
        }
        catch (...) {
            thrown = true;
        }
        if (thrown)
            throw;
    }
    if (err)
        strm.setstate (err);

    return strm;
}

[Pre-Berlin] Reopened at the request of Paolo Carlini and Steve Clamage.

[Pre-Portland] A relevant newsgroup post:

The current proposed resolution of issue #309 (http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-active.html#309) is unacceptable. I write commercial software and coding around this makes my code ugly, non-intuitive, and requires comments referring people to this very issue. Following is the full explanation of my experience.

In the course of writing software for commercial use, I constructed std::ifstream's based on user-supplied pathnames on typical POSIX systems.

It was expected that some files that opened successfully might not read successfully -- such as a pathname which actually referred to a directory. Intuitively, I expected the streambuffer underflow() code to throw an exception in this situation, and recent implementations of libstdc++'s basic_filebuf do just that (as well as many of my own custom streambufs).

I also intuitively expected that the istream code would convert these exceptions to the "badbit" set on the stream object, because I had not requested exceptions. I refer to 27.6.1.1, p4.

However, this was not the case on at least two implementations -- if the first thing I did with an istream was call operator>>( T& ) for T among the basic arithmetic types and std::string. Looking further I found that the sentry's constructor was invoking the exception when it pre-scanned for whitespace, and the extractor function (operator>>()) was not catching exceptions in this situation.

So, I was in a situation where setting 'noskipws' would change the istream's behavior even though no characters (whitespace or not) could ever be successfully read.

Also, calling .peek() on the istream before calling the extractor() changed the behavior (.peek() had the effect of setting the badbit ahead of time).

I found this all to be so inconsistent and inconvenient for me and my code design, that I filed a bugzilla entry for libstdc++. I was then told that the bug cannot be fixed until issue #309 is resolved by the committee.

Proposed resolution:

Rationale:

The LWG agrees there is minor variation between implementations, but believes that it doesn't matter. This is a rarely used corner case. There is no evidence that this has any commercial importance or that it causes actual portability problems for customers trying to write code that runs on multiple implementations.


342. seek and eofbit

Section: 27.6.1.3 [istream.unformatted] Status: Open Submitter: Howard Hinnant Date: 2001-10-09

View all other issues in [istream.unformatted].

View all issues with Open status.

Discussion:

I think we have a defect.

According to lwg issue 60 which is now a dr, the description of seekg in 27.6.1.3 [istream.unformatted] paragraph 38 now looks like:

Behaves as an unformatted input function (as described in 27.6.1.3, paragraph 1), except that it does not count the number of characters extracted and does not affect the value returned by subsequent calls to gcount(). After constructing a sentry object, if fail() != true, executes rdbuf()->pubseekpos( pos).

And according to lwg issue 243 which is also now a dr, 27.6.1.3, paragraph 1 looks like:

Each unformatted input function begins execution by constructing an object of class sentry with the default argument noskipws (second) argument true. If the sentry object returns true, when converted to a value of type bool, the function endeavors to obtain the requested input. Otherwise, if the sentry constructor exits by throwing an exception or if the sentry object returns false, when converted to a value of type bool, the function returns without attempting to obtain any input. In either case the number of extracted characters is set to 0; unformatted input functions taking a character array of non-zero size as an argument shall also store a null character (using charT()) in the first location of the array. If an exception is thrown during input then ios::badbit is turned on in *this's error state. If (exceptions() & badbit) != 0 then the exception is rethrown. It also counts the number of characters extracted. If no exception has been thrown it ends by storing the count in a member object and returning the value specified. In any event the sentry object is destroyed before leaving the unformatted input function.

And finally 27.6.1.1.2/5 says this about sentry:

If, after any preparation is completed, is.good() is true, ok_ != false otherwise, ok_ == false.

So although the seekg paragraph says that the operation proceeds if !fail(), the behavior of unformatted functions says the operation proceeds only if good(). The two statements are contradictory when only eofbit is set. I don't think the current text is clear which condition should be respected.

Further discussion from Redmond:

PJP: It doesn't seem quite right to say that seekg is "unformatted". That makes specific claims about sentry that aren't quite appropriate for seeking, which has less fragile failure modes than actual input. If we do really mean that it's unformatted input, it should behave the same way as other unformatted input. On the other hand, "principle of least surprise" is that seeking from EOF ought to be OK.

Pre-Berlin: Paolo points out several problems with the proposed resolution in Ready state:

Proposed resolution:

Change 27.6.1.3 [istream.unformatted] to:

Behaves as an unformatted input function (as described in 27.6.1.3, paragraph 1), except that it does not count the number of characters extracted, does not affect the value returned by subsequent calls to gcount(), and does not examine the value returned by the sentry object. After constructing a sentry object, if fail() != true, executes rdbuf()->pubseekpos(pos). In case of success, the function calls clear(). In case of failure, the function calls setstate(failbit) (which may throw ios_base::failure).

[Lillehammer: Matt provided wording.]

Rationale:

In C, fseek does clear EOF. This is probably what most users would expect. We agree that having eofbit set should not deter a seek, and that a successful seek should clear eofbit. Note that fail() is true only if failbit or badbit is set, so using !fail(), rather than good(), satisfies this goal.
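A short sketch of the behavior the proposed resolution is after (not guaranteed by the current wording):

#include <cassert>
#include <sstream>

int main ()
{
    std::istringstream in ("42");
    int i = 0;

    in >> i;                  // extraction reaches the end of the buffer: eofbit is set
    assert (in.eof () && !in.fail ());

    in.seekg (0);             // fail() is false, so the seek proceeds and clears eofbit
    assert (in.good ());

    in >> i;                  // the stream is usable again
    assert (i == 42);
}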


343. Unspecified library header dependencies

Section: 21 [strings], 23 [containers], 27 [input.output] Status: Open Submitter: Martin Sebor Date: 2001-10-09

View all other issues in [strings].

View all issues with Open status.

Discussion:

The synopses of the C++ library headers clearly show which names are required to be defined in each header. Since, in order to implement the classes and templates defined in these headers, declarations of other templates (but not necessarily their definitions) are typically necessary, the standard in 17.4.4, p1 permits library implementers to include any headers needed to implement the definitions in each header.

For instance, although it is not explicitly specified in the synopsis of <string>, at the point of definition of the std::basic_string template the declaration of the std::allocator template must be in scope. All current implementations simply include <memory> from within <string>, either directly or indirectly, to bring the declaration of std::allocator into scope.

Additionally, however, some implementations also include <istream> and <ostream> at the top of <string> to bring the declarations of std::basic_istream and std::basic_ostream into scope (which are needed in order to implement the string inserter and extractor operators (21.3.7.9 [lib.string.io])). Other implementations only include <iosfwd>, since strictly speaking, only the declarations and not the full definitions are necessary.

Obviously, it is possible to implement <string> without actually providing the full definitions of all the templates std::basic_string uses (std::allocator, std::basic_istream, and std::basic_ostream). Furthermore, not only is it possible, doing so is likely to have a positive effect on compile-time efficiency.

But while it may seem perfectly reasonable to expect a program that uses the std::basic_string insertion and extraction operators to also explicitly include <istream> or <ostream>, respectively, it doesn't seem reasonable to also expect it to explicitly include <memory>. Since what's reasonable and what isn't is highly subjective one would expect the standard to specify what can and what cannot be assumed. Unfortunately, that isn't the case.

The examples below demonstrate the issue.

Example 1:

It is not clear whether the following program is complete:

#include <string>

extern std::basic_ostream<char> &strm;

int main () {
    strm << std::string ("Hello, World!\n");
}

or whether one must explicitly include <memory> or <ostream> (or both) in addition to <string> in order for the program to compile.

Example 2:

Similarly, it is unclear whether the following program is complete:

#include <istream>

extern std::basic_iostream<char> &strm;

int main () {
    strm << "Hello, World!\n";
}

or whether one needs to explicitly include <ostream>, and perhaps even other headers containing the definitions of other required templates:

#include <ios>
#include <istream>
#include <ostream>
#include <streambuf>

extern std::basic_iostream<char> &strm;

int main () {
    strm << "Hello, World!\n";
}

Example 3:

Likewise, it seems unclear whether the program below is complete:

#include <iterator>

bool foo (std::istream_iterator<int> a, std::istream_iterator<int> b)
{
    return a == b;
}

int main () { }

or whether one should be required to include <istream>.

There are many more examples that demonstrate this lack of a requirement. I believe that in a good number of cases it would be unreasonable to require that a program explicitly include all the headers necessary for a particular template to be specialized, but I think that there are cases such as some of those above where it would be desirable to allow implementations to include only as much as necessary and not more.

Proposed resolution:

For every C++ library header, supply a minimum set of other C++ library headers that are required to be included by that header. The proposed list is below (C++ headers for C Library Facilities, table 12 in 17.4.1.2, p3, are omitted):

+------------+--------------------+
| C++ header |required to include |
+============+====================+
|<algorithm> |                    |
+------------+--------------------+
|<bitset>    |                    |
+------------+--------------------+
|<complex>   |                    |
+------------+--------------------+
|<deque>     |<memory>            |
+------------+--------------------+
|<exception> |                    |
+------------+--------------------+
|<fstream>   |<ios>               |
+------------+--------------------+
|<functional>|                    |
+------------+--------------------+
|<iomanip>   |<ios>               |
+------------+--------------------+
|<ios>       |<streambuf>         |
+------------+--------------------+
|<iosfwd>    |                    |
+------------+--------------------+
|<iostream>  |<istream>, <ostream>|
+------------+--------------------+
|<istream>   |<ios>               |
+------------+--------------------+
|<iterator>  |                    |
+------------+--------------------+
|<limits>    |                    |
+------------+--------------------+
|<list>      |<memory>            |
+------------+--------------------+
|<locale>    |                    |
+------------+--------------------+
|<map>       |<memory>            |
+------------+--------------------+
|<memory>    |                    |
+------------+--------------------+
|<new>       |<exception>         |
+------------+--------------------+
|<numeric>   |                    |
+------------+--------------------+
|<ostream>   |<ios>               |
+------------+--------------------+
|<queue>     |<deque>             |
+------------+--------------------+
|<set>       |<memory>            |
+------------+--------------------+
|<sstream>   |<ios>, <string>     |
+------------+--------------------+
|<stack>     |<deque>             |
+------------+--------------------+
|<stdexcept> |                    |
+------------+--------------------+
|<streambuf> |<ios>               |
+------------+--------------------+
|<string>    |<memory>            |
+------------+--------------------+
|<strstream> |                    |
+------------+--------------------+
|<typeinfo>  |<exception>         |
+------------+--------------------+
|<utility>   |                    |
+------------+--------------------+
|<valarray>  |                    |
+------------+--------------------+
|<vector>    |<memory>            |
+------------+--------------------+

Rationale:

The portability problem is real. A program that works correctly on one implementation might fail on another, because of different header dependencies. This problem was understood before the standard was completed, and it was a conscious design choice.

One possible way to deal with this, as a library extension, would be an <all> header.

Hinnant: It's time we dealt with this issue for C++0X. Reopened.


382. codecvt do_in/out result

Section: 22.2.1.4 [locale.codecvt] Status: Open Submitter: Martin Sebor Date: 2002-08-30

View all other issues in [locale.codecvt].

View all issues with Open status.

Discussion:

It seems that the descriptions of codecvt do_in() and do_out() leave sufficient room for interpretation so that two implementations of codecvt may not work correctly with the same filebuf. Specifically, the following seems less than adequately specified:

  1. the conditions under which the functions terminate
  2. precisely when the functions return ok
  3. precisely when the functions return partial
  4. the full set of conditions when the functions return error
  1. 22.2.1.4.2 [locale.codecvt.virtuals], p2 says this about the effects of the function: ...Stops if it encounters a character it cannot convert... This assumes that there *is* a character to convert. What happens when there is a sequence that doesn't form a valid source character, such as an unassigned or invalid UNICODE character, or a sequence that cannot possibly form a character (e.g., the sequence "\xc0\xff" in UTF-8)?
  2. Table 53 says that the function returns codecvt_base::ok to indicate that the function(s) "completed the conversion." Suppose that the source sequence is "\xc0\x80" in UTF-8, with from pointing to '\xc0' and (from_end==from + 1). It is not clear whether the return value should be ok or partial (see below).
  3. Table 53 says that the function returns codecvt_base::partial if "not all source characters converted." With the from pointers set up the same way as above, it is not clear whether the return value should be partial or ok (see above).
  4. Table 53, in the row describing the meaning of error mistakenly refers to a "from_type" character, without the symbol from_type having been defined. Most likely, the word "source" character is intended, although that is not sufficient. The functions may also fail when they encounter an invalid source sequence that cannot possibly form a valid source character (e.g., as explained in bullet 1 above).

Finally, the conditions described at the end of 22.2.1.4.2 [locale.codecvt.virtuals], p4 don't seem to be possible:

"A return value of partial, if (from_next == from_end), indicates that either the destination sequence has not absorbed all the available destination elements, or that additional source elements are needed before another destination element can be produced."

If the value is partial, it's not clear to me that (from_next ==from_end) could ever hold if there isn't enough room in the destination buffer. In order for (from_next==from_end) to hold, all characters in that range must have been successfully converted (according to 22.2.1.4.2 [locale.codecvt.virtuals], p2) and since there are no further source characters to convert, no more room in the destination buffer can be needed.

It's also not clear to me that (from_next==from_end) could ever hold if additional source elements are needed to produce another destination character (not element as incorrectly stated in the text). partial is returned if "not all source characters have been converted" according to Table 53, which also implies that (from_next==from) does NOT hold.

Could it be that the intended qualifying condition was actually (from_next != from_end), i.e., that the sentence was supposed to read

"A return value of partial, if (from_next != from_end),..."

which would make perfect sense, since, as far as I understand it, partial can only occur if (from_next != from_end)?
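The ambiguity can be seen by driving in() by hand (a sketch only; it assumes the locale supplies a multibyte codecvt<wchar_t, char, mbstate_t>):

#include <cwchar>
#include <locale>

std::codecvt_base::result
convert_some (const std::locale& loc, const char* from, const char* from_end)
{
    typedef std::codecvt<wchar_t, char, std::mbstate_t> cvt_type;
    const cvt_type& cvt = std::use_facet<cvt_type> (loc);

    std::mbstate_t state = std::mbstate_t ();
    wchar_t        buf [16];
    const char*    from_next;
    wchar_t*       to_next;

    // if [from, from_end) ends in the middle of a multibyte character, should
    // the return value be ok or partial once from_next == from_end?
    return cvt.in (state, from, from_end, from_next,
                   buf, buf + 16, to_next);
}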

[Lillehammer: Defer for the moment, but this really needs to be fixed. Right now, the description of codecvt is too vague for it to be a useful contract between providers and clients of codecvt facets. (Note that both vendors and users can be both providers and clients of codecvt facets.) The major philosophical issue is whether the standard should only describe mappings that take a single wide character to multiple narrow characters (and vice versa), or whether it should describe fully general N-to-M conversions. When the original standard was written only the former was contemplated, but today, in light of the popularity of utf8 and utf16, that doesn't seem sufficient for C++0x. Bill supports general N-to-M conversions; we need to make sure Martin and Howard agree.]

Proposed resolution:


387. std::complex over-encapsulated

Section: 26.3 [complex.numbers] Status: Open Submitter: Gabriel Dos Reis Date: 2002-11-08

View other active issues in [complex.numbers].

View all other issues in [complex.numbers].

View all issues with Open status.

Discussion:

The absence of explicit description of std::complex<T> layout makes it impossible to reuse existing software developed in traditional languages like Fortran or C with unambiguous and commonly accepted layout assumptions. There ought to be a way for practitioners to predict with confidence the layout of std::complex<T> whenever T is a numerical datatype. The absence of ways to access individual parts of a std::complex<T> object as lvalues unduly promotes severe pessimizations. For example, the only way to change, independently, the real and imaginary parts is to write something like

complex<T> z;
// ...
// set the real part to r
z = complex<T>(r, z.imag());
// ...
// set the imaginary part to i
z = complex<T>(z.real(), i);

At this point, it seems appropriate to recall that a complex number is, in effect, just a pair of numbers with no particular invariant to maintain. Existing practice in numerical computations has it that a complex number datatype is usually represented by Cartesian coordinates. Therefore the over-encapsulation put in the specification of std::complex<> is not justified.
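As an illustration of the interoperability being asked for (and assuming the Cartesian layout guarantee is adopted), complex data could then be handed directly to a C or Fortran routine; zaxpy_ below is a hypothetical external binding modeled on BLAS, shown only for illustration:

#include <complex>

extern "C" void zaxpy_ (const int* n, const double* alpha,
                        const double* x, const int* incx,
                        double* y, const int* incy);

void scaled_add (int n, std::complex<double> alpha,
                 const std::complex<double>* x, std::complex<double>* y)
{
    const int inc = 1;

    // valid only if complex<double> is laid out as two contiguous doubles
    zaxpy_ (&n, reinterpret_cast<const double*>(&alpha),
            reinterpret_cast<const double*>(x), &inc,
            reinterpret_cast<double*>(y), &inc);
}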

Proposed resolution:

Add the following requirements to 26.3 [complex.numbers] as 26.3/4:

If z is an lvalue expression of type cv std::complex<T> then

Moreover, if a is an expression of pointer type cv complex<T>* and the expression a[i] is well-defined for an integer expression i then:

In the header synopsis in 26.3.1 [complex.synopsis], replace

  template<class T> T real(const complex<T>&);
  template<class T> T imag(const complex<T>&);

with

  template<class T> const T& real(const complex<T>&);
  template<class T>       T& real(      complex<T>&);
  template<class T> const T& imag(const complex<T>&);
  template<class T>       T& imag(      complex<T>&);

In 26.3.7 [complex.value.ops] paragraph 1, change

  template<class T> T real(const complex<T>&);

to

  template<class T> const T& real(const complex<T>&);
  template<class T>       T& real(      complex<T>&);

and change the Returns clause to "Returns: The real part of x."

In 26.3.7 [complex.value.ops] paragraph 2, change

  template<class T> T imag(const complex<T>&);

to

  template<class T> const T& imag(const complex<T>&);
  template<class T>       T& imag(      complex<T>&);

and change the Returns clause to "Returns: The imaginary part of x."

[Kona: The layout guarantee is absolutely necessary for C compatibility. However, there was disagreement about the other part of this proposal: retrieving elements of the complex number as lvalues. An alternative: continue to have real() and imag() return rvalues, but add set_real() and set_imag(). Straw poll: return lvalues - 2, add setter functions - 5. Related issue: do we want reinterpret_cast as the interface for converting a complex to an array of two reals, or do we want to provide a more explicit way of doing it? Howard will try to resolve this issue for the next meeting.]

[pre-Sydney: Howard summarized the options in n1589.]

Rationale:

The LWG believes that C99 compatibility would be enough justification for this change even without other considerations. All existing implementations already have the layout proposed here.


393. do_in/do_out operation on state unclear

Section: 22.2.1.4.2 [locale.codecvt.virtuals] Status: New Submitter: Alberto Barbati Date: 2002-12-24

View other active issues in [locale.codecvt.virtuals].

View all other issues in [locale.codecvt.virtuals].

View all issues with New status.

Discussion:

This DR follows the discussion on the previous thread "codecvt::do_in not consuming external characters". It's just a clarification issue and not a request for a change.

Can do_in()/do_out() produce output characters without consuming input characters as a result of operation on state?

Proposed resolution:

Add a note at the end of 22.2.1.5.2 [lib.locale.codecvt.virtuals], paragraph 3:

[Note: As a result of operations on state, it can return ok or partial and set from_next == from and to_next != to. --end note]

Rationale:

The submitter believes that the standard already provides an affirmative answer to the question. However, the current wording has induced a few library implementors to make the incorrect assumption that do_in()/do_out() always consume at least one internal character when they succeed.

The submitter also believes that the proposed resolution is not in conflict with the related issue 76. Moreover, by explicitly allowing operations on state to produce characters, a codecvt implementation may effectively implement N-to-M translations without violating the "one character at a time" principle described in such issue. On a side note, the footnote in the proposed resolution of issue 76 that informally rules out N-to-M translations for basic_filebuf should be removed if this issue is accepted as valid.


394. behavior of formatted output on failure

Section: 27.6.2.6.1 [ostream.formatted.reqmts] Status: Open Submitter: Martin Sebor Date: 2002-12-27

View all issues with Open status.

Discussion:

There is a contradiction in Formatted output about what bit is supposed to be set if the formatting fails. One sentence says it's badbit and another that it's failbit.

27.6.2.5.1, p1 says in the Common Requirements on Formatted output functions:

     ... If the generation fails, then the formatted output function
     does setstate(ios::failbit), which might throw an exception.

27.6.2.5.2, p1 goes on to say this about Arithmetic Inserters:

... The formatting conversion occurs as if it performed the following code fragment:

     bool failed = use_facet<
         num_put<charT, ostreambuf_iterator<charT, traits> >
         >(getloc()).put(*this, *this, fill(), val).failed();

     ... If failed is true then does setstate(badbit) ...

The original intent of the text, according to Jerry Schwarz (see c++std-lib-10500), is captured in the following paragraph:

In general "badbit" should mean that the stream is unusable because of some underlying failure, such as disk full or socket closure; "failbit" should mean that the requested formatting wasn't possible because of some inconsistency such as negative widths. So typically if you clear badbit and try to output something else you'll fail again, but if you clear failbit and try to output something else you'll succeed.

In the case of the arithmetic inserters, since num_put cannot report failure by any means other than exceptions (in response to which the stream must set badbit, which prevents the kind of recoverable error reporting mentioned above), the only other detectable failure is if the iterator returned from num_put returns true from failed().

Since that can only happen (at least with the required iostream specializations) under such conditions as the underlying failure referred to above (e.g., disk full), setting badbit would seem to be the appropriate response (indeed, it is required in 27.6.2.5.2, p1). It follows that failbit can never be directly set by the arithmetic (it can only be set by the sentry object under some unspecified conditions).

The situation is different for other formatted output functions which can fail as a result of the streambuf functions failing (they may do so by means other than exceptions), and which are then required to set failbit.

The contradiction, then, is that ostream::operator<<(int) will set badbit if the disk is full, while operator<<(ostream&, char) will set failbit under the same conditions. To make the behavior consistent, the Common requirements sections for the Formatted output functions should be changed as proposed below.

[Kona: There's agreement that this is a real issue. What we decided at Kona: 1. An error from the buffer (which can be detected either directly from streambuf's member functions or by examining a streambuf_iterator) should always result in badbit getting set. 2. There should never be a circumstance where failbit gets set. That represents a formatting error, and there are no circumstances under which the output facets are specified as signaling a formatting error. (Even more so for string output than for numeric because there's nothing to format.) If we ever decide to make it possible for formatting errors to exist then the facets can signal the error directly, and that should go in clause 22, not clause 27. 3. The phrase "if generation fails" is unclear and should be eliminated. It's not clear whether it's intended to mean a buffer error (e.g. a full disk), a formatting error, or something else. Most people thought it was supposed to refer to buffer errors; if so, we should say so. Martin will provide wording.]

Proposed resolution:

Rationale:


396. what are characters zero and one

Section: 23.3.5.1 [bitset.cons] Status: Open Submitter: Martin Sebor Date: 2003-01-05

View all other issues in [bitset.cons].

View all issues with Open status.

Discussion:

23.3.5.1, p6 [lib.bitset.cons] talks about a generic character having the value of 0 or 1 but there is no definition of what that means for charT other than char and wchar_t. And even for those two types, the values 0 and 1 are not actually what is intended -- the values '0' and '1' are. This, along with the converse problem in the description of to_string() in 23.3.5.2, p33, looks like a defect remotely related to DR 303.

http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/lwg-defects.html#303

23.3.5.1:
  -6-  An element of the constructed string has value zero if the
       corresponding character in str, beginning at position pos,
       is 0. Otherwise, the element has the value one.
    
23.3.5.2:
  -33-  Effects: Constructs a string object of the appropriate
        type and initializes it to a string of length N characters.
        Each character is determined by the value of its
        corresponding bit position in *this. Character position N
        - 1 corresponds to bit position zero. Subsequent decreasing
        character positions correspond to increasing bit positions.
        Bit value zero becomes the character 0, bit value one becomes
        the character 1.
    

Also note the typo in 23.3.5.1, p6: the object under construction is a bitset, not a string.

Proposed resolution:

Change the constructor's function declaration immediately before 23.3.5.1 [bitset.cons] p3 to:

    template <class charT, class traits, class Allocator>
    explicit
    bitset(const basic_string<charT, traits, Allocator>& str,
           typename basic_string<charT, traits, Allocator>::size_type pos = 0,
           typename basic_string<charT, traits, Allocator>::size_type n =
             basic_string<charT, traits, Allocator>::npos,
           charT zero = charT('0'), charT one = charT('1'))

Change the first two sentences of 23.3.5.1 [bitset.cons] p6 to: "An element of the constructed string has value 0 if the corresponding character in str, beginning at position pos, is zero. Otherwise, the element has the value 1."

Change the text of the second sentence in 23.3.5.1, p5 to read: "The function then throws invalid_argument if any of the rlen characters in str beginning at position pos is other than zero or one. The function uses traits::eq() to compare the character values."

Change the declaration of the to_string member function immediately before 23.3.5.2 [bitset.members] p33 to:

    template <class charT, class traits, class Allocator>
    basic_string<charT, traits, Allocator> 
    to_string(charT zero = charT('0'), charT one = charT('1')) const;

Change the last sentence of 23.3.5.2 [bitset.members] p33 to: "Bit value 0 becomes the character zero, bit value 1 becomes the character one."

Change 23.3.5.3 [bitset.operators] p8 to:

Returns:

  os << x.template to_string<charT,traits,allocator<charT> >(
      use_facet<ctype<charT> >(os.getloc()).widen('0'),
      use_facet<ctype<charT> >(os.getloc()).widen('1'));

Rationale:

There is a real problem here: we need the character values of '0' and '1', and we have no way to get them since strings don't have imbued locales. In principle the "right" solution would be to provide an extra object, either a ctype facet or a full locale, which would be used to widen '0' and '1'. However, there was some discomfort about using such a heavyweight mechanism. The proposed resolution allows those users who care about this issue to get it right.

We fix the inserter to use the new arguments. Note that we already fixed the analogous problem with the extractor in issue 303.
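
The fragment below sketches how the proposed zero and one parameters would be used; it assumes the declarations above are adopted and is not valid under the current wording:

#include <bitset>
#include <memory>
#include <string>

int main()
{
    std::bitset<4> b(std::string("0110"));
    // With the proposed defaults, existing callers see no change:
    std::string s = b.to_string<char, std::char_traits<char>, std::allocator<char> >();
    // Callers who need different characters can now say so explicitly:
    std::string t = b.to_string<char, std::char_traits<char>, std::allocator<char> >('.', '*');
    // s == "0110", t == ".**."
}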


397. ostream::sentry dtor throws exceptions

Section: 27.6.2.4 [ostream::sentry] Status: Open Submitter: Martin Sebor Date: 2003-01-05

View other active issues in [ostream::sentry].

View all other issues in [ostream::sentry].

View all issues with Open status.

Discussion:

17.4.4.8, p3 prohibits library dtors from throwing exceptions.

27.6.2.3, p4 says this about the ostream::sentry dtor:

    -4- If ((os.flags() & ios_base::unitbuf) && !uncaught_exception())
        is true, calls os.flush().
    

27.6.2.6, p7 that describes ostream::flush() says:

    -7- If rdbuf() is not a null pointer, calls rdbuf()->pubsync().
        If that function returns -1 calls setstate(badbit) (which
        may throw ios_base::failure (27.4.4.3)).
    

That seems like a defect, since both pubsync() and setstate() can throw an exception.

[ The contradiction is real. Clause 17 says destructors may never throw exceptions, and clause 27 specifies a destructor that does throw. In principle we might change either one. We're leaning toward changing clause 17: putting in an "unless otherwise specified" clause, and then putting in a footnote saying the sentry destructor is the only one that can throw. PJP suggests specifying that sentry::~sentry() should internally catch any exceptions it might cause. ]
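
A minimal sketch of the approach PJP suggests, using a hypothetical sentry-like class that holds a reference to its stream (the class name and member are illustrative only):

#include <exception>
#include <ostream>

class sentry_like
{
    std::ostream& os_;
public:
    explicit sentry_like(std::ostream& os) : os_(os) {}
    ~sentry_like()
    {
        if ((os_.flags() & std::ios_base::unitbuf) && !std::uncaught_exception())
        {
            try { os_.flush(); }
            catch (...) { /* swallow, so the destructor itself never throws */ }
        }
    }
};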

[ See 418 and 622 for related issues. ]

Proposed resolution:


398. effects of end-of-file on unformatted input functions

Section: 27.6.2.4 [ostream::sentry] Status: Open Submitter: Martin Sebor Date: 2003-01-05

View other active issues in [ostream::sentry].

View all other issues in [ostream::sentry].

View all issues with Open status.

Discussion:

While reviewing unformatted input member functions of istream for their behavior when they encounter end-of-file during input I found that the requirements vary, sometimes unexpectedly, and in more than one case even contradict established practice (GNU libstdc++ 3.2, IBM VAC++ 6.0, STLPort 4.5, SunPro 5.3, HP aCC 5.38, Rogue Wave libstd 3.1, and Classic Iostreams).

The following unformatted input member functions set eofbit if they encounter an end-of-file (this is the expected behavior, and also the behavior of all major implementations):

    basic_istream<charT, traits>&
    get (char_type*, streamsize, char_type);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    get (char_type*, streamsize);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    getline (char_type*, streamsize, char_type);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    getline (char_type*, streamsize);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    ignore (int, int_type);
    
    basic_istream<charT, traits>&
    read (char_type*, streamsize);
    

Also sets failbit if it encounters end-of-file.

    streamsize readsome (char_type*, streamsize);
    

The following unformatted input member functions set failbit but not eofbit if they encounter an end-of-file (I find this odd since the functions make it impossible to distinguish a general failure from a failure due to end-of-file; the requirement is also in conflict with all major implementations, which set both eofbit and failbit):

    int_type get();
    
    basic_istream<charT, traits>&
    get (char_type&);
    

These functions only set failbit if they extract no characters, otherwise they don't set any bits, even on failure (I find this inconsistency quite unexpected; the requirement is also in conflict with all major implementations, which set eofbit whenever they encounter end-of-file):

    basic_istream<charT, traits>&
    get (basic_streambuf<charT, traits>&, char_type);
    
    basic_istream<charT, traits>&
    get (basic_streambuf<charT, traits>&);
    

This function sets no bits (all implementations except for STLport and Classic Iostreams set eofbit when they encounter end-of-file):

    int_type peek ();
    

Informally, what we want is a global statement of intent saying that eofbit gets set if we trip across EOF, and then we can take away the specific wording for individual functions. A full review is necessary. The wording currently in the standard is a mishmash, and changing it on an individual basis wouldn't make things better. Dietmar will do this work.
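
For illustration, the following program shows the stream state that the implementations listed above produce when get(char_type&) hits end-of-file; the second assertion documents common practice rather than a current requirement:

#include <cassert>
#include <sstream>

int main()
{
    std::istringstream in("");   // empty stream: the first read hits end-of-file
    char c;
    in.get(c);
    assert(in.fail());           // required: no character was extracted
    assert(in.eof());            // universal practice, but not required by the current text
}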

Proposed resolution:


401. incorrect type casts in table 32 in lib.allocator.requirements

Section: 20.1.2 [allocator.requirements] Status: Open Submitter: Markus Mauhart Date: 2003-02-27

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Discussion:

I think that in par2 of [default.con.req] the last two lines of table 32 contain two incorrect type casts. The lines are ...

  a.construct(p,t)   Effect: new((void*)p) T(t)
  a.destroy(p)       Effect: ((T*)p)->~T()

.... with the prerequisites coming from the preceding two paragraphs, especially from table 31:

  alloc<T>             a     ;// an allocator for T
  alloc<T>::pointer    p     ;// random access iterator
                              // (may be different from T*)
  alloc<T>::reference  r = *p;// T&
  T const&             t     ;

For those two type casts ("(void*)p" and "(T*)p") to be well-formed, conversions to T* and void* would be required for all alloc<T>::pointer, so the casts would implicitly introduce extra requirements for alloc<T>::pointer, in addition to the only current requirement (being a random access iterator).
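
For illustration, a hypothetical allocator pointer type that satisfies the stated requirement (random access iterator) but not the unstated one might look like this:

  #include <cstddef>
  #include <iterator>

  // Fancy pointer with no conversion to void* or T*. With such a pointer the
  // expressions "new((void*)p) T(t)" and "((T*)p)->~T()" from Table 32 are
  // ill-formed, which is the unstated extra requirement this issue points out.
  template <class T>
  struct offset_ptr
  {
      typedef std::random_access_iterator_tag iterator_category;
      typedef T                               value_type;
      typedef std::ptrdiff_t                  difference_type;
      typedef T*                              pointer;
      typedef T&                              reference;

      std::ptrdiff_t off;   // offset from some base address
      // ... iterator operations omitted; nothing here converts to void* or T* ...
  };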

Proposed resolution:

Change 20.1.2 [allocator.requirements], Table 35: Allocator requirements last column of the row describing "a.construct(p,t)" to:

Effect: ::new((void*)p) T(t) Constructs a copy of t at p. If t is an rvalue, it is forwarded to T's constructor as an rvalue, else it is forwarded as an lvalue.

Change 20.1.2 [allocator.requirements], Table 35: Allocator requirements last column of the row describing "a.destroy(p)" to:

Effect: ((T*)p)->~T() Destroys the object at p.

Note: Actually I would prefer to replace "((T*)p)->dtor_name" with "p->dtor_name", but AFAICS this is not possible because of an omission in 13.5.6 [over.ref] (for which I have filed another DR on 29.11.2002).

[Kona: The LWG thinks this is somewhere on the border between Open and NAD. The intent is clear: construct constructs an object at the location p. It's reading too much into the description to think that literally calling new is required. Tweaking this description is low priority until we can do a thorough review of allocators, and, in particular, allocators with non-default pointer types.]

[ Batavia: Proposed resolution changed to less code and more description. ]

[ post Oxford: This would be rendered NAD Editorial by acceptance of N2257. ]


408. Is vector<reverse_iterator<char*> > forbidden?

Section: 24.1 [iterator.requirements] Status: Open Submitter: Nathan Myers Date: 2003-06-03

View other active issues in [iterator.requirements].

View all other issues in [iterator.requirements].

View all issues with Open status.

Discussion:

I've been discussing iterator semantics with Dave Abrahams, and a surprise has popped up. I don't think this has been discussed before.

24.1 [iterator.requirements] says that the only operation that can be performed on "singular" iterator values is to assign a non-singular value to them. (It doesn't say they can be destroyed, and that's probably a defect.) Some implementations have taken this to imply that there is no need to initialize the data member of a reverse_iterator<> in the default constructor. As a result, code like

  std::vector<std::reverse_iterator<char*> > v(7);
  v.reserve(1000);

invokes undefined behavior, because it must default-initialize the vector elements, and then copy them to other storage. Of course many other vector operations on these adapters are also left undefined, and which those are is not reliably deducible from the standard.

I don't think that 24.1 was meant to make standard-library iterator types unsafe. Rather, it was meant to restrict what operations may be performed by functions which take general user- and standard iterators as arguments, so that raw pointers would qualify as iterators. However, this is not clear in the text, and others have come to the opposite conclusion.

One question is whether the standard iterator adaptors have defined copy semantics. Another is whether they have defined destructor semantics: is

  { std::vector<std::reverse_iterator<char*> >  v(7); }

undefined too?

Note this is not a question of whether algorithms are allowed to rely on copy semantics for arbitrary iterators, just whether the types we actually supply support those operations. I believe the resolution must be expressed in terms of the semantics of the adapter's argument type. It should make clear that, e.g., the reverse_iterator<T> constructor is actually required to execute T(), and so copying is defined if the result of T() is copyable.

Issue 235, which defines reverse_iterator's default constructor more precisely, has some relevance to this issue. However, it is not the whole story.

The issue was whether

  reverse_iterator() { }

is allowed, vs.

  reverse_iterator() : current() { }

The difference is when T is char*, where the first leaves the member uninitialized, and possibly equal to an existing pointer value, or (on some targets) may result in a hardware trap when copied.

8.5 paragraph 5 seems to make clear that the second is required to satisfy DR 235, at least for non-class Iterator argument types.

But that only takes care of reverse_iterator, and doesn't establish a policy for all iterators. (The reverse iterator adapter was just an example.) In particular, does my function

  template <typename Iterator>
    void f() { std::vector<Iterator>  v(7); } 

evoke undefined behavior for some conforming iterator definitions? I think it does, now, because vector<> will destroy those singular iterator values, and that's explicitly disallowed.

24.1 shouldn't give blanket permission to copy all singular iterators, because then pointers wouldn't qualify as iterators. However, it should allow copying of that subset of singular iterator values that are default-initialized, and it should explicitly allow destroying any iterator value, singular or not, default-initialized or not.

Related issue: 407

[ We don't want to require all singular iterators to be copyable, because that is not the case for pointers. However, default construction may be a special case. Issue: is it really default construction we want to talk about, or is it something like value initialization? We need to check with core to see whether default constructed pointers are required to be copyable; if not, it would be wrong to impose so strict a requirement for iterators. ]

Proposed resolution:


417. what does ctype::do_widen() return on failure

Section: 22.2.1.1.2 [locale.ctype.virtuals] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [locale.ctype.virtuals].

View all issues with Open status.

Discussion:

The Effects and Returns clauses of the do_widen() member function of the ctype facet fail to specify the behavior of the function on failure. Note that the function may not be able to simply cast the narrow character argument to the type of the result, since doing so may yield the wrong value for some wchar_t encodings. Popular implementations of ctype<wchar_t> that use mbtowc() and UTF-8 as the native encoding (e.g., GNU glibc) will fail when the argument's MSB is set. There is no way for the rest of locale and iostream to reliably detect this failure.

[Kona: This is a real problem. Widening can fail. It's unclear what the solution should be. Returning WEOF works for the wchar_t specialization, but not in general. One option might be to add a default, like narrow. But that's an incompatible change. Using traits::eof might seem like a good idea, but facets don't have access to traits (a recurring problem). We could have widen throw an exception, but that's a scary option; existing library components aren't written with the assumption that widen can throw.]

Proposed resolution:


418. exceptions thrown during iostream cleanup

Section: 27.4.2.1.6 [ios::Init] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all issues with Open status.

Discussion:

The dtor of the ios_base::Init object is supposed to call flush() on the 6 standard iostream objects cout, cerr, clog, wcout, wcerr, and wclog. This call may cause an exception to be thrown.

17.4.4.8, p3 prohibits all library destructors from throwing exceptions.

The question is: What should this dtor do if one or more of these calls to flush() ends up throwing an exception? This can happen quite easily if one of the facets installed in the locale imbued in the iostream object throws.

[Kona: We probably can't do much better than what we've got, so the LWG is leaning toward NAD. At the point where the standard stream objects are being cleaned up, the usual error reporting mechanisms are all unavailable. An exception from flush at this point will definitely cause problems. A quality implementation might reasonably swallow the exception, or call abort, or do something even more drastic.]

[ See 397 and 622 for related issues. ]

Proposed resolution:


419. istream extractors not setting failbit if eofbit is already set

Section: 27.6.1.1.3 [istream::sentry] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [istream::sentry].

View all issues with Open status.

Discussion:

27.6.1.1.3 [istream::sentry], p2 says that the istream::sentry ctor prepares for input if is.good() is true. p4 then goes on to say that the ctor sets the sentry::ok_ member to true if the stream state is good after any preparation. 27.6.1.2.1 [istream.formatted.reqmts], p1 then says that a formatted input function endeavors to obtain the requested input if the sentry's operator bool() returns true. Given these requirements, no formatted extractor should ever set failbit if the initial stream rdstate() == eofbit. That is contrary to the behavior of all implementations I tested. The program below prints out "eof = 1, fail = 0" followed by "eof = 1, fail = 1" on all of them.


#include <sstream>
#include <cstdio>

int main()
{
    std::istringstream strm ("1");

    int i = 0;

    strm >> i;

    std::printf ("eof = %d, fail = %d\n",
                 !!strm.eof (), !!strm.fail ());

    strm >> i;

    std::printf ("eof = %d, fail = %d\n",
                 !!strm.eof (), !!strm.fail ());
}


Comments from Jerry Schwarz (c++std-lib-11373):
Jerry Schwarz wrote:
I don't know where (if anywhere) it says it in the standard, but the formatted extractors are supposed to set failbit if they don't extract any characters. If they didn't then simple loops like
while (cin >> x);
would loop forever.
Further comments from Martin Sebor:
The question is which part of the extraction should prevent this from happening by setting failbit when eofbit is already set. It could either be the sentry object or the extractor. It seems that most implementations have chosen to set failbit in the sentry [...] so that's the text that will need to be corrected.

Pre Berlin: This issue is related to 342. If the sentry sets failbit when it finds eofbit already set, then you can never seek away from the end of stream.

Kona: Possibly NAD. If eofbit is set then good() will return false. We then set ok to false. We believe that the sentry's constructor should always set failbit when ok is false, and we also think the standard already says that. Possibly it could be clearer.

Proposed resolution:

Change 27.6.1.1.3 [istream::sentry], p2 to:

explicit sentry(basic_istream<charT,traits>& is , bool noskipws = false);

-2- Effects: If is.good() is false, calls is.setstate(failbit). Otherwise prepares for formatted or unformatted input. ...


421. is basic_streambuf copy-constructible?

Section: 27.5.2.1 [streambuf.cons] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [streambuf.cons].

View all issues with Open status.

Discussion:

The reflector thread starting with c++std-lib-11346 notes that the class template basic_streambuf, along with basic_stringbuf and basic_filebuf, is copy-constructible but that the semantics of the copy constructors are not defined anywhere. Further, different implementations behave differently in this respect: some prevent copy construction of objects of these types by declaring their copy ctors and assignment operators private, others exhibit undefined behavior, while others still give these operations well-defined semantics.

Note that this problem doesn't seem to be isolated to just the three types mentioned above. A number of other types in the library section of the standard provide a compiler-generated copy ctor and assignment operator yet fail to specify their semantics. It's believed that the only types for which this is actually a problem (i.e. types where the compiler-generated default may be inappropriate and may not have been intended) are locale facets. See issue 439.

Proposed resolution:

27.5.2 [lib.streambuf]: Add into the synopsis, public section, just above the destructor declaration:

basic_streambuf(const basic_streambuf& sb);
basic_streambuf& operator=(const basic_streambuf& sb);

Insert after 27.5.2.1, paragraph 2:

basic_streambuf(const basic_streambuf& sb);

Constructs a copy of sb.

Postconditions:

                eback() == sb.eback()
                gptr()  == sb.gptr()
                egptr() == sb.egptr()
                pbase() == sb.pbase()
                pptr()  == sb.pptr()
                epptr() == sb.epptr()
                getloc() == sb.getloc()
basic_streambuf& operator=(const basic_streambuf& sb);

Assigns the data members of sb to this.

Postconditions:

                eback() == sb.eback()
                gptr()  == sb.gptr()
                egptr() == sb.egptr()
                pbase() == sb.pbase()
                pptr()  == sb.pptr()
                epptr() == sb.epptr()
                getloc() == sb.getloc()

Returns: *this.

27.7.1 [lib.stringbuf]:

Option A:

Insert into the basic_stringbuf synopsis in the private section:

basic_stringbuf(const basic_stringbuf&);             // not defined
basic_stringbuf& operator=(const basic_stringbuf&);  // not defined

Option B:

Insert into the basic_stringbuf synopsis in the public section:

basic_stringbuf(const basic_stringbuf& sb);
basic_stringbuf& operator=(const basic_stringbuf& sb);

27.7.1.1, insert after paragraph 4:

basic_stringbuf(const basic_stringbuf& sb);

Constructs an independent copy of sb as if with sb.str(), and with the openmode that sb was constructed with.

Postconditions:

               str() == sb.str()
               gptr()  - eback() == sb.gptr()  - sb.eback()
               egptr() - eback() == sb.egptr() - sb.eback()
               pptr()  - pbase() == sb.pptr()  - sb.pbase()
               getloc() == sb.getloc()

Note: The only requirement on epptr() is that it point beyond the initialized range if an output sequence exists. There is no requirement that epptr() - pbase() == sb.epptr() - sb.pbase().

basic_stringbuf& operator=(const basic_stringbuf& sb);

After assignment the basic_stringbuf has the same state as if it were initially copy constructed from sb, except that the basic_stringbuf is allowed to retain any excess capacity it might have, which may in turn affect the value of epptr().

27.8.1.1 [lib.filebuf]

Insert at the bottom of the basic_filebuf synopsis:

private:
  basic_filebuf(const basic_filebuf&);             // not defined
  basic_filebuf& operator=(const basic_filebuf&);  // not defined

[Kona: this is an issue for basic_streambuf itself and for its derived classes. We are leaning toward allowing basic_streambuf to be copyable, and specifying its precise semantics. (Probably the obvious: copying the buffer pointers.) We are less sure whether the streambuf derived classes should be copyable. Howard will write up a proposal.]

[Sydney: Dietmar presented a new argument against basic_streambuf being copyable: it can lead to an encapsulation violation. Filebuf inherits from streambuf. Now suppose you inherit a my_hijacking_buf from streambuf. You can copy the streambuf portion of a filebuf to a my_hijacking_buf, giving you access to the pointers into the filebuf's internal buffer. Perhaps not a very strong argument, but it was strong enough to make people nervous. There was weak preference for having streambuf not be copyable. There was weak preference for having stringbuf not be copyable even if streambuf is. Move this issue to open for now. ]
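
For concreteness, the kind of class Dietmar described might look roughly like this (all names are hypothetical); it compiles as long as basic_streambuf's copy constructor is not made private:

#include <streambuf>

// Copying the streambuf subobject of a filebuf hands out that filebuf's
// protected buffer pointers through a different, fully public interface.
struct my_hijacking_buf : std::streambuf
{
    explicit my_hijacking_buf(const std::streambuf& sb) : std::streambuf(sb) {}
    char* put_base() const { return pbase(); }
    char* put_end()  const { return epptr(); }
};

// my_hijacking_buf hb(some_filebuf);   // copies some_filebuf's buffer pointers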

[ 2007-01-12, Howard: Rvalue Reference Recommendations for Chapter 27 recommends protected copy constructor and assignment for basic_streambuf with the same semantics as would be generated by the compiler. These members aid in derived classes implementing move semantics. A protected copy constructor and copy assignment operator do not expose encapsulation more so than it is today as each data member of a basic_streambuf is already both readable and writable by derived classes via various get/set protected member functions (eback(), setp(), etc.). Rather a protected copy constructor and copy assignment operator simply make the job of derived classes implementing move semantics less tedious and error prone. ]

Rationale:

27.5.2 [lib.streambuf]: The proposed basic_streambuf copy constructor and assignment operator are the same as currently implied by the lack of declarations: public and simply copies the data members. This resolution is not a change but a clarification of the current standard.

27.7.1 [lib.stringbuf]: There are two reasonable options: A) Make basic_stringbuf not copyable. This is likely the status-quo of current implementations. B) Reasonable copy semantics of basic_stringbuf can be defined and implemented. A copyable basic_streambuf is arguably more useful than a non-copyable one. This should be considered as new functionality and not the fixing of a defect. If option B is chosen, ramifications from issue 432 are taken into account.

27.8.1.1 [lib.filebuf]: There are no reasonable copy semantics for basic_filebuf.


423. effects of negative streamsize in iostreams

Section: 27 [input.output] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [input.output].

View all issues with Open status.

Discussion:

A third party test suite tries to exercise istream::ignore(N) with a negative value of N and expects that the implementation will treat N as if it were 0. Our implementation asserts that (N >= 0) holds and aborts the test.

I can't find anything in section 27 that prohibits such values but I don't see what the effects of such calls should be, either (this applies to a number of unformatted input functions as well as some member functions of the basic_streambuf template).

Proposed resolution:

I propose that we add to each function in clause 27 that takes an argument, say N, of type streamsize a Requires clause saying that "N >= 0." The intent is to allow negative streamsize values in calls to precision() and width() but disallow it in calls to streambuf::sgetn(), istream::ignore(), or ostream::write().

[Kona: The LWG agreed that this is probably what we want. However, we need a review to find all places where functions in clause 27 take arguments of type streamsize that shouldn't be allowed to go negative. Martin will do that review.]
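
The calls in question might look like this (illustrative only; the second call is what the proposed Requires clause would rule out, while the first is intended to remain valid):

  #include <sstream>

  int main()
  {
      std::istringstream in("some input");
      in.precision(-1);   // negative streamsize intended to remain allowed here
      in.ignore(-1);      // currently unspecified; an "N >= 0" Requires clause would forbid it
  }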


424. normative notes

Section: 17.3.1.1 [structure.summary] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all issues with Open status.

Discussion:

The text in 17.3.1.1, p1 says:
"Paragraphs labelled "Note(s):" or "Example(s):" are informative, other paragraphs are normative."
The library section makes heavy use of paragraphs labeled "Notes(s)," some of which are clearly intended to be normative (see list 1), while some others are not (see list 2). There are also those where the intent is not so clear (see list 3).

List 1 -- Examples of (presumably) normative Notes:
20.6.1.1 [allocator.members], p3,
20.6.1.1 [allocator.members], p10,
21.3.2 [string.cons], p11,
22.1.1.2 [locale.cons], p11,
23.2.2.3 [deque.modifiers], p2,
25.3.7 [alg.min.max], p3,
26.3.6 [complex.ops], p15,
27.5.2.4.3 [streambuf.virt.get], p7.

List 2 -- Examples of (presumably) informative Notes:
18.5.1.3 [new.delete.placement], p3,
21.3.6.6 [string::replace], p14,
22.2.1.4.2 [locale.codecvt.virtuals], p3,
25.1.1 [alg.foreach], p4,
26.3.5 [complex.member.ops], p1,
27.4.2.5 [ios.base.storage], p6.

List 3 -- Examples of Notes that are not clearly either normative or informative:
22.1.1.2 [locale.cons], p8,
22.1.1.5 [locale.statics], p6,
27.5.2.4.5 [streambuf.virt.put], p4.

None of these lists is meant to be exhaustive.

[Definitely a real problem. The big problem is there's material that doesn't quite fit any of the named paragraph categories (e.g. Effects). Either we need a new kind of named paragraph, or we need to put more material in unnamed paragraphs just after the signature. We need to talk to the Project Editor about how to do this. ]

Proposed resolution:

[Pete: I changed the paragraphs marked "Note" and "Notes" to use "Remark" and "Remarks". Fixed as editorial. This change has been in the WD since the post-Redmond mailing, in 2004. Recommend NAD.]

[ Batavia: We feel that the references in List 2 above should be changed from Remarks to Notes. We also feel that those items in List 3 need to be double checked for the same change. Alan and Pete to review. ]


427. stage 2 and rationale of DR 221

Section: 22.2.2.1.2 [facet.num.get.virtuals] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View other active issues in [facet.num.get.virtuals].

View all other issues in [facet.num.get.virtuals].

View all issues with Open status.

Discussion:

The requirements specified in Stage 2 and reiterated in the rationale of DR 221 (and echoed again in DR 303) specify that num_get<charT>:: do_get() compares characters on the stream against the widened elements of "012...abc...ABCX+-"

An implementation is required to allow programs to instantiate the num_get template on any charT that satisfies the requirements on a user-defined character type. These requirements do not include the ability of the character type to be equality comparable (the char_traits template must be used to perform tests for equality). Hence, the num_get template cannot be implemented to support any arbitrary character type. The num_get template must either make the assumption that the character type is equality-comparable (as some popular implementations do), or it may use char_traits<charT> to do the comparisons (some other popular implementations do that). This diversity of approaches makes it difficult to write portable programs that attempt to instantiate the num_get template on user-defined types.
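
A sketch of the two strategies, for an arbitrary character type charT (illustrative only; neither is unambiguously sanctioned by the current text, and the second assumes a char_traits<charT> specialization exists):

  #include <string>

  template <class charT>
  bool matches_with_operator(charT c, charT widened_atom)
  {
      return c == widened_atom;   // assumes charT is equality-comparable
  }

  template <class charT>
  bool matches_with_traits(charT c, charT widened_atom)
  {
      return std::char_traits<charT>::eq(c, widened_atom);
  }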

[Kona: the heart of the problem is that we're theoretically supposed to use traits classes for all fundamental character operations like assignment and comparison, but facets don't have traits parameters. This is a fundamental design flaw and it appears all over the place, not just in this one place. It's not clear what the correct solution is, but a thorough review of facets and traits is in order. The LWG considered and rejected the possibility of changing numeric facets to use narrowing instead of widening. This may be a good idea for other reasons (see issue 459), but it doesn't solve the problem raised by this issue. Whether we use widen or narrow the num_get facet still has no idea which traits class the user wants to use for the comparison, because only streams, not facets, are passed traits classes. The standard does not require that two different traits classes with the same char_type must necessarily have the same behavior.]

Informally, one possibility: require that some of the basic character operations, such as eq, lt, and assign, must behave the same way for all traits classes with the same char_type. If we accept that limitation on traits classes, then the facet could reasonably be required to use char_traits<charT>.

Proposed resolution:


430. valarray subset operations

Section: 26.5.2.4 [valarray.sub] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all issues with Open status.

Discussion:

The standard fails to specify the behavior of valarray::operator[](slice) and other valarray subset operations when they are passed an "invalid" slice object, i.e., either a slice that doesn't make sense at all (e.g., slice (0, 1, 0)) or one that doesn't specify a valid subset of the valarray object (e.g., slice (2, 1, 1) for a valarray of size 1).

[Kona: the LWG believes that invalid slices should invoke undefined behavior. Valarrays are supposed to be designed for high performance, so we don't want to require specific checking. We need wording to express this decision.]
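
Spelled out, the calls in question look like this (illustrative only):

  #include <valarray>

  int main()
  {
      std::valarray<double> v(1.0, 1);      // a valarray of size 1
      std::slice nonsense(0, 1, 0);         // stride of zero: doesn't make sense at all
      std::slice out_of_range(2, 1, 1);     // starts beyond the end of v
      // v[nonsense];  v[out_of_range];     // per the Kona direction: undefined behavior,
                                            // with no checking required of the implementation
  }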

Proposed resolution:


431. Swapping containers with unequal allocators

Section: 20.1.2 [allocator.requirements], 25 [algorithms] Status: Open Submitter: Matt Austern Date: 2003-09-20

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Discussion:

Clause 20.1.2 [allocator.requirements] paragraph 4 says that implementations are permitted to supply containers that are unable to cope with allocator instances and that container implementations may assume that all instances of an allocator type compare equal. We gave implementers this latitude as a temporary hack, and eventually we want to get rid of it. What happens when we're dealing with allocators that don't compare equal?

In particular: suppose that v1 and v2 are both objects of type vector<int, my_alloc> and that v1.get_allocator() != v2.get_allocator(). What happens if we write v1.swap(v2)? Informally, three possibilities:

1. This operation is illegal. Perhaps we could say that an implementation is required to check and to throw an exception, or perhaps we could say it's undefined behavior.

2. The operation performs a slow swap (i.e. using three invocations of operator=), leaving each allocator with its original container. This would be an O(N) operation.

3. The operation swaps both the vectors' contents and their allocators. This would be an O(1) operation. That is:

    my_alloc a1(...);
    my_alloc a2(...);
    assert(a1 != a2);

    vector<int, my_alloc> v1(a1);
    vector<int, my_alloc> v2(a2);
    assert(a1 == v1.get_allocator());
    assert(a2 == v2.get_allocator());

    v1.swap(v2);
    assert(a1 == v2.get_allocator());
    assert(a2 == v1.get_allocator());
  

[Kona: This is part of a general problem. We need a paper saying how to deal with unequal allocators in general.]

[pre-Sydney: Howard argues for option 3 in N1599. ]

[ 2007-01-12, Howard: This issue will now tend to come up more often with move constructors and move assignment operators. For containers, these members transfer resources (i.e. the allocated memory) just like swap. ]

[ Batavia: There is agreement to overload the container swap on the allocator's Swappable requirement using concepts. If the allocator supports Swappable, then container's swap will swap allocators, else it will perform a "slow swap" using copy construction and copy assignment. ]
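
A sketch of the "slow swap" fallback mentioned above, for some container type C; the helper name is hypothetical and is shown only to make the O(N) option concrete:

  // Option 2 above: three copy operations; each container keeps its original allocator.
  template <class C>
  void slow_swap(C& a, C& b)
  {
      C tmp(a);
      a = b;
      b = tmp;
  }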

Proposed resolution:


446. Iterator equality between different containers

Section: 24.1 [iterator.requirements], 23.1 [container.requirements] Status: Open Submitter: Andy Koenig Date: 2003-12-16

View other active issues in [iterator.requirements].

View all other issues in [iterator.requirements].

View all issues with Open status.

Discussion:

What requirements does the standard place on equality comparisons between iterators that refer to elements of different containers? For example, if v1 and v2 are empty vectors, is v1.end() == v2.end() allowed to yield true? Is it allowed to throw an exception?

The standard appears to be silent on both questions.

[Sydney: The intention is that comparing two iterators from different containers is undefined, but it's not clear if we say that, or even whether it's something we should be saying in clause 23 or in clause 24. Intuitively we might want to say that equality is defined only if one iterator is reachable from another, but figuring out how to say it in any sensible way is a bit tricky: reachability is defined in terms of equality, so we can't also define equality in terms of reachability. ]

Proposed resolution:


454. basic_filebuf::open should accept wchar_t names

Section: 27.8.1.4 [filebuf.members] Status: Open Submitter: Bill Plauger Date: 2004-01-30

View other active issues in [filebuf.members].

View all other issues in [filebuf.members].

View all issues with Open status.

Discussion:

    basic_filebuf *basic_filebuf::open(const char *, ios_base::open_mode);

should be supplemented with the overload:

    basic_filebuf *basic_filebuf::open(const wchar_t *, ios_base::open_mode);

Depending on the operating system, one of these forms is fundamental and the other requires an implementation-defined mapping to determine the actual filename.

[Sydney: Yes, we want to allow wchar_t filenames. Bill will provide wording.]

[ In Toronto we noted that this is issue 5 from N1569. ]

How does this interact with the newly-defined character types, and how do we avoid interface explosion considering std::string overloads that were added? Propose another solution that is different than the suggestion proposed by PJP.

Suggestion is to make a member template function for basic_string (for char, wchar_t, u16char, u32char instantiations), and then just keep a const char* member.

The goal is to do implicit conversion from character string literals to the appropriate basic_string type. Not quite sure if this is possible.

Implementors are free to add specific overloads for non-char character types.

Proposed resolution:

Change from:

basic_filebuf<charT,traits>* open(
	const char* s,
	ios_base::openmode mode );

Effects: If is_open() != false, returns a null pointer. Otherwise, initializes the filebuf as required. It then opens a file, if possible, whose name is the NTBS s ("as if" by calling std::fopen(s,modstr)).

to:

basic_filebuf<charT,traits>* open(
	const char* s,
	ios_base::openmode mode );

basic_filebuf<charT,traits>* open(
	const wchar_t* ws,
	ios_base::openmode mode );

Effects: If is_open() != false, returns a null pointer. Otherwise, initializes the filebuf as required. It then opens a file, if possible, whose name is the NTBS s ("as if" by calling std::fopen(s,modstr)). For the second signature, the NTBS s is determined from the WCBS ws in an implementation-defined manner.

(NOTE: For a system that "naturally" represents a filename as a WCBS, the NTBS s in the first signature may instead be mapped to a WCBS; if so, it follows the same mapping rules as the first argument to open.)
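
Usage of the second signature would look like this (illustrative only; the overload is proposed here and does not exist in the current standard):

  #include <fstream>

  int main()
  {
      std::filebuf fb;
      fb.open(L"data.txt", std::ios_base::in);   // wide name mapped to the actual
                                                 // filename in an implementation-defined way
  }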

Rationale:

Slightly controversial, but by a 7-1 straw poll the LWG agreed to move this to Ready. The controversy was because the mapping between wide names and files in a filesystem is implementation defined. The counterargument, which most but not all LWG members accepted, is that the mapping between narrow file names and files is also implementation defined.

[Lillehammer: Moved back to "open" status, at Beman's urging. (1) Why just basic_filebuf, instead of also basic_fstream (and possibly other things too). (2) Why not also constructors that take std::basic_string? (3) We might want to wait until we see Beman's filesystem library; we might decide that it obviates this.]


458. 24.1.5 contains unintended limitation for operator-

Section: 24.1.5 [random.access.iterators] Status: Open Submitter: Daniel Frey Date: 2004-02-27

View all other issues in [random.access.iterators].

View all issues with Open status.

Discussion:

In 24.1.5 [lib.random.access.iterators], table 76 the operational semantics for the expression "r -= n" are defined as "return r += -n". This means, that the expression -n must be valid, which is not the case for unsigned types.

[ Sydney: Possibly not a real problem, since difference type is required to be a signed integer type. However, the wording in the standard may be less clear than we would like. ]

Proposed resolution:

To remove this limitation, I suggest to change the operational semantics for this column to:

    { Distance m = n; 
      if (m >= 0) 
        while (m--) --r; 
      else 
        while (m++) ++r;
      return r; }

459. Requirement for widening in stage 2 is overspecification

Section: 22.2.2.1.2 [facet.num.get.virtuals] Status: Open Submitter: Martin Sebor Date: 2004-03-16

View other active issues in [facet.num.get.virtuals].

View all other issues in [facet.num.get.virtuals].

View all issues with Open status.

Discussion:

When parsing strings of wide-character digits, the standard requires the library to widen narrow-character "atoms" and compare the widened atoms against the characters that are being parsed. Simply narrowing the wide characters would be far simpler, and probably more efficient. The two choices are equivalent except in convoluted test cases, and many implementations already ignore the standard and use narrow instead of widen.

First, I disagree that using narrow() instead of widen() would necessarily have unfortunate performance implications. A possible implementation of narrow() that allows num_get to be implemented in a much simpler and arguably comparably efficient way to what calling widen() allows, i.e. without making a virtual call to do_narrow every time, is as follows:

  inline char ctype<wchar_t>::narrow (wchar_t wc, char dflt) const
  {
      const unsigned wi = unsigned (wc);

      if (wi > UCHAR_MAX)
          return typeid (*this) == typeid (ctype<wchar_t>) ?
                 dflt : do_narrow (wc, dflt);

      if (narrow_ [wi] < 0) {
         const char nc = do_narrow (wc, dflt);
         if (nc == dflt)
             return dflt;
         narrow_ [wi] = nc;
      }

      return char (narrow_ [wi]);
  }

Second, I don't think the change proposed in the issue (i.e., to use narrow() instead of widen() during Stage 2) would be at all drastic. Existing implementations with the exception of libstdc++ currently already use narrow() so the impact of the change on programs would presumably be isolated to just a single implementation. Further, since narrow() is not required to translate alternate wide digit representations such as those mentioned in issue 303 to their narrow equivalents (i.e., the portable source characters '0' through '9'), the change does not necessarily imply that these alternate digits would be treated as ordinary digits and accepted as part of numbers during parsing. In fact, the requirement in 22.2.1.1.2 [locale.ctype.virtuals], p13 forbids narrow() to translate an alternate digit character, wc, to an ordinary digit in the basic source character set unless the expression (ctype<charT>::is(ctype_base::digit, wc) == true) holds. This in turn is prohibited by the C standard (7.25.2.1.5, 7.25.2.1.5, and 5.2.1, respectively) for charT of either char or wchar_t.

[Sydney: To a large extent this is a nonproblem. As long as you're only trafficking in char and wchar_t we're only dealing with a stable character set, so you don't really need either 'widen' or 'narrow': can just use literals. Finally, it's not even clear whether widen-vs-narrow is the right question; arguably we should be using codecvt instead.]

Proposed resolution:

Change stage 2 so that implementations are permitted to use either technique to perform the comparison:

  1. call widen on the atoms and compare (either by using operator== or char_traits<charT>::eq) the input with the widened atoms, or
  2. call narrow on the input and compare the narrow input with the atoms
  3. do (1) or (2) only if charT is not char or wchar_t, respectively; i.e., avoid calling widen or narrow if the source and destination types are the same
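
A sketch of option (2), assuming ct is the stream's ctype<charT> facet and atoms is the narrow atom string from Stage 2 (the helper name is illustrative only):

  #include <cstring>
  #include <locale>

  // Returns true if the input character c narrows to one of the Stage 2 atoms.
  template <class charT>
  bool is_atom(const std::ctype<charT>& ct, charT c, const char* atoms)
  {
      const char nc = ct.narrow(c, '\0');
      return nc != '\0' && std::strchr(atoms, nc) != 0;
  }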

462. Destroying objects with static storage duration

Section: 3.6.3 [basic.start.term], 18.3 [cstdint] Status: Open Submitter: Bill Plauger Date: 2004-03-23

View all issues with Open status.

Discussion:

3.6.3 Termination spells out in detail the interleaving of static destructor calls and calls to functions registered with atexit. To match this behavior requires intimate cooperation between the code that calls destructors and the exit/atexit machinery. The former is tied tightly to the compiler; the latter is a primitive mechanism inherited from C that traditionally has nothing to do with static construction and destruction. The benefits of intermixing destructor calls with atexit handler calls is questionable at best, and very difficult to get right, particularly when mixing third-party C++ libraries with different third-party C++ compilers and C libraries supplied by still other parties.

I believe the right thing to do is defer all static destruction until after all atexit handlers are called. This is a change in behavior, but one that is likely visible only to perverse test suites. At the very least, we should permit deferred destruction even if we don't require it.

[If this is to be changed, it should probably be changed by CWG. At this point, however, the LWG is leaning toward NAD. Implementing what the standard says is hard work, but it's not impossible and most vendors went through that pain years ago. Changing this behavior would be a user-visible change, and would break at least one real application.]

[ Batavia: Send to core with our recommendation that we should permit deferred destruction but not require it. ]

[ Howard: The course of action recommended in Batavia would undo LWG issue 3 and break current code implementing the "phoenix singleton". Search the net for "phoenix singleton atexit" to get a feel for the size of the adverse impact this change would have. Below is sample code which implements the phoenix singleton and would break if atexit is changed in this way: ]

#include <cstdlib>
#include <iostream>
#include <type_traits>
#include <new>

class A
{
    bool alive_;
    A(const A&);
    A& operator=(const A&);
public:
    A() : alive_(true) {std::cout << "A()\n";}
    ~A() {alive_ = false; std::cout << "~A()\n";}
    void use()
    {
        if (alive_)
            std::cout << "A is alive\n";
        else
            std::cout << "A is dead\n";
    }
};

void deallocate_resource();

// This is the phoenix singleton pattern
A& get_resource(bool create = true)
{
    static std::aligned_storage<sizeof(A), std::alignment_of<A>::value>::type buf;
    static A* a;
    if (create)
    {
        if (a != (A*)&buf)
        {
            a = ::new (&buf) A;
            std::atexit(deallocate_resource);
        }
    }
    else
    {
        a->~A();
        a = (A*)&buf + 1;
    }
    return *a;
}

void deallocate_resource()
{
    get_resource(false);
}

void use_A(const char* message)
{
    A& a = get_resource();
    std::cout << "Using A " << message << "\n";
    a.use();
}

struct B
{
    ~B() {use_A("from ~B()");}
};

B b;

int main()
{
    use_A("from main()");
}

The correct output is:

A()
Using A from main()
A is alive
~A()
A()
Using A from ~B()
A is alive
~A()

Proposed resolution:


471. result of what() implementation-defined

Section: 18.6.1 [type.info] Status: Open Submitter: Martin Sebor Date: 2004-06-28

View all other issues in [type.info].

View all issues with Open status.

Discussion:

[lib.exception] specifies the following:

    exception (const exception&) throw();
    exception& operator= (const exception&) throw();

    -4- Effects: Copies an exception object.
    -5- Notes: The effects of calling what() after assignment
        are implementation-defined.

First, does the Note only apply to the assignment operator? If so, what are the effects of calling what() on a copy of an object? Is the returned pointer supposed to point to an identical copy of the NTBS returned by what() called on the original object or not?

Second, is this Note intended to extend to all the derived classes in section 19? I.e., does the standard provide any guarantee for the effects of what() called on a copy of any of the derived class described in section 19?

Finally, if the answer to the first question is no, I believe it constitutes a defect since throwing an exception object typically implies invoking the copy ctor on the object. If the answer is yes, then I believe the standard ought to be clarified to spell out exactly what the effects are on the copy (i.e., after the copy ctor was called).

[Redmond: Yes, this is fuzzy. The issue of derived classes is fuzzy too.]

[ Batavia: Howard provided wording. ]

Proposed resolution:

Change 18.7.1 [exception] to:

exception(const exception& e) throw();
exception& operator=(const exception& e) throw();

-4- Effects: Copies an exception object.

-5- Remarks: The effects of calling what() after assignment are implementation-defined.

-5- Throws: Nothing. This also applies to all standard library-defined classes that derive from exception.

-7- Postcondition: strcmp(what(), e.what()) == 0. This also applies to all standard library-defined classes that derive from exception.
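
The effect of the proposed postcondition can be illustrated as follows; runtime_error stands in for any standard library class derived from exception:

#include <cassert>
#include <cstring>
#include <stdexcept>

int main()
{
    std::runtime_error e("message");
    std::runtime_error copy(e);
    assert(std::strcmp(copy.what(), e.what()) == 0);   // the proposed guarantee
}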


473. underspecified ctype calls

Section: 22.2.1.1 [locale.ctype] Status: Open Submitter: Martin Sebor Date: 2004-07-01

View all issues with Open status.

Discussion:

Most ctype member functions come in two forms: one that operates on a single character at a time and another form that operates on a range of characters. Both forms are typically described by a single Effects and/or Returns clause.

The Returns clause of each of the single-character non-virtual forms suggests that the function calls the corresponding single character virtual function, and that the array form calls the corresponding virtual array form. Neither of the two forms of each virtual member function is required to be implemented in terms of the other.

There are three problems:

1. One is that while the standard does suggest that each non-virtual member function calls the corresponding form of the virtual function, it doesn't actually explicitly require it.

Implementations that cache results from some of the virtual member functions for some or all values of their arguments might want to call the array form from the non-array form the first time to fill the cache and avoid any or most subsequent virtual calls. Programs that rely on each form of the virtual function being called from the corresponding non-virtual function will see unexpected behavior when using such implementations.

2. The second problem is that either form of each of the virtual functions can be overridden by a user-defined function in a derived class to return a value that is different from the one produced by the virtual function of the alternate form that has not been overridden.

Thus, it might be possible for, say, ctype::widen(c) to return one value, while for ctype::widen(&c, &c + 1, &wc) to set wc to another value. This is almost certainly not intended. Both forms of every function should be required to return the same result for the same character, otherwise the same program using an implementation that calls one form of the functions will behave differently than when using another implementation that calls the other form of the function "under the hood."

3. The last problem is that the standard text fails to specify whether one form of any of the virtual functions is permitted to be implemented in terms of the other form or not, and if so, whether it is required or permitted to call the overridden virtual function or not.

Thus, a program that overrides one of the virtual functions so that it calls the other form which then calls the base member might end up in an infinite loop if the called form of the base implementation of the function in turn calls the other form.
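
A sketch of that hazard, using a hypothetical user-defined facet:

  #include <locale>

  // Overrides only the single-character do_widen and forwards to the array form.
  // If the library implements its array do_widen by calling do_widen(char) for
  // each character, the two forms end up calling each other without end.
  struct my_ctype : std::ctype<wchar_t>
  {
      virtual wchar_t do_widen(char c) const
      {
          wchar_t wc;
          widen(&c, &c + 1, &wc);   // the non-virtual widen() dispatches to the virtual array form
          return wc;
      }
  };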

Lillehammer: Part of this isn't a real problem. We already talk about caching (22.1.1/6). But part is a real problem: ctype virtuals may call each other, so users don't know which ones to override to avoid infinite loops.

This is a problem for all facet virtuals, not just ctype virtuals, so we probably want a blanket statement in clause 22 for all facets. The LWG is leaning toward a blanket prohibition, that a facet's virtuals may never call each other. We might want to do that in clause 27 too, for that matter. A review is necessary. Bill will provide wording.

Proposed resolution:


484. Convertible to T

Section: 24.1.1 [input.iterators] Status: Open Submitter: Chris Jefferson Date: 2004-09-16

View all other issues in [input.iterators].

View all issues with Open status.

Discussion:

From comp.std.c++:

I note that, given an input iterator a for type T, *a only has to be "convertible to T", not actually of type T.

Firstly, I can't seem to find an exact definition of "convertible to T". While I assume it is the obvious definition (an implicit conversion), I can't find an exact definition. Is there one?

Slightly more worryingly, there doesn't seem to be any restriction on this type, other than that it is "convertible to T". Consider two input iterators a and b. I would personally assume that most people would expect *a==*b to perform T(*a)==T(*b); however, it doesn't seem that the standard requires that, and whatever type *a is (call it U) could have == defined on it with totally different semantics and still be a valid input iterator.

Is this a correct reading? When using input iterators, should I write T(*a) all over the place to be sure that the object I'm using is the class I expect?

This is especially a nuisance for operations that are defined to be "convertible to bool". (This is probably allowed so that implementations could return, say, an int and avoid an unnecessary conversion; however, all implementations I have seen simply return a bool anyway.) Typical implementations of STL algorithms just write things like while(a!=b && *a!=0). But strictly speaking, there are lots of types that are convertible to T but that also overload the appropriate operators, so this doesn't behave as expected.

If we want to make code like this legal (which most people seem to expect), then we'll need to tighten up what we mean by "convertible to T".
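
A sketch of the pathological case (all names hypothetical):

  // proxy is "convertible to int", so an input iterator for int may return it from
  // operator*; yet a user-supplied operator== for proxy need not agree with int's ==.
  struct proxy
  {
      int value;
      operator int() const { return value; }
  };

  bool operator==(const proxy&, const proxy&) { return true; }   // nonsense semantics, still legal

  // With such a type, *a == *b compares proxy objects, while T(*a) == T(*b) compares
  // ints, and the two can give different answers.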

[Lillehammer: The first part is NAD, since "convertible" is well-defined in core. The second part is basically about pathological overloads. It's a minor problem but a real one. So leave open for now, hope we solve it as part of iterator redesign.]

Proposed resolution:


485. output iterator insufficiently constrained

Section: 24.1.2 [output.iterators] Status: Open Submitter: Chris Jefferson Date: 2004-10-13

View all other issues in [output.iterators].

View all issues with Open status.

Discussion:

The note on 24.1.2 Output iterators insufficiently limits what can be performed on output iterators. While it requires that each iterator is progressed through only once and that each iterator is written to only once, it does not require the following things:

Note: Here it is assumed that x is an output iterator of type X which has not yet been assigned to.

a) That each value of the output iterator is written to: The standard allows: ++x; ++x; ++x;

b) That assignments to the output iterator are made in order: X a(x); ++a; *a=1; *x=2; is allowed

c) Chains of output iterators cannot be constructed: X a(x); ++a; X b(a); ++b; X c(b); ++c; is allowed, and under the current wording (I believe) x,a,b,c could be written to in any order.

I do not believe this was the intention of the standard.

[Lillehammer: Real issue. There are lots of constraints we intended but didn't specify. Should be solved as part of iterator redesign.]

Proposed resolution:


488. rotate throws away useful information

Section: 25.2.11 [alg.rotate] Status: Ready Submitter: Howard Hinnant Date: 2004-11-22

View all issues with Ready status.

Discussion:

rotate takes 3 iterators: first, middle and last which point into a sequence, and rearranges the sequence such that the subrange [middle, last) is now at the beginning of the sequence and the subrange [first, middle) follows. The return type is void.

In many use cases of rotate, the client needs to know where the subrange [first, middle) starts after the rotate is performed. This might look like:

  rotate(first, middle, last);
  Iterator i = first;
  advance(i, distance(middle, last));

Unless the iterators are random access, the computation to find the start of the subrange [first, middle) has linear complexity. However, it is not difficult for rotate to return this information with negligible additional computation expense. So the client could code:

  Iterator i = rotate(first, middle, last);

and the resulting program becomes significantly more efficient.

While the backwards compatibility hit with this change is not zero, it is very small (similar to that of lwg 130), and there is a significant benefit to the change.

Proposed resolution:

In 25 [algorithms] p2, change the return type of rotate from void to ForwardIterator:

  template<class ForwardIterator>
    ForwardIterator rotate(ForwardIterator first, ForwardIterator middle,
                ForwardIterator last);

In 25.2.11 [alg.rotate], change the return type in the same way:

  template<class ForwardIterator>
    ForwardIterator rotate(ForwardIterator first, ForwardIterator middle,
                ForwardIterator last);

In 25.2.11 [alg.rotate] insert a new paragraph after p1:

Returns: first + (last - middle).

[ The LWG agrees with this idea, but has one quibble: we want to make sure not to give the impression that the function "advance" is actually called, just that the nth iterator is returned. (Calling advance is observable behavior, since users can specialize it for their own iterators.) Howard will provide wording. ]

[Howard provided wording for mid-meeting-mailing Jun. 2005.]

[ Toronto: moved to Ready. ]


492. Invalid iterator arithmetic expressions

Section: 23 [containers], 24 [iterators], 25 [algorithms] Status: Open Submitter: Thomas Mang Date: 2004-12-12

View all other issues in [containers].

View all issues with Open status.

Discussion:

Various clauses other than clause 25 make use of iterator arithmetic not supported by the iterator category in question. Algorithms in clause 25 are exceptional because of 25 [lib.algorithms], paragraph 9, but this paragraph does not provide semantics to the expression "iterator - n", where n denotes a value of a distance type between iterators.

1) Examples of current wording:

Current wording outside clause 25:

23.2.2.4 [lib.list.ops], paragraphs 19-21: "first + 1", "(i - 1)", "(last - first)"
23.3.1.1 [lib.map.cons], paragraph 4: "last - first"
23.3.2.1 [lib.multimap.cons], paragraph 4: "last - first"
23.3.3.1 [lib.set.cons], paragraph 4: "last - first"
23.3.4.1 [lib.multiset.cons], paragraph 4: "last - first"
24.4.1 [lib.reverse.iterators], paragraph 1: "(i - 1)"

[Important note: The list is not complete, just an illustration. The same issue might well apply to other paragraphs not listed here.]

None of these expressions is valid for the corresponding iterator category.

Current wording in clause 25:

25.1.1 [lib.alg.foreach], paragraph 1: "last - 1"
25.1.3 [lib.alg.find.end], paragraph 2: "[first1, last1 - (last2-first2))"
25.2.8 [lib.alg.unique], paragraph 1: "(i - 1)"
25.2.8 [lib.alg.unique], paragraph 5: "(i - 1)"

However, the current wording of 25 [lib.algorithms], paragraph 9 covers none of these four cases:

Current wording of 25 [lib.algorithms], paragraph 9:

"In the description of the algorithms operator + and - are used for some of the iterator categories for which they do not have to be defined. In these cases the semantics of a+n is the same as that of

{X tmp = a;
advance(tmp, n);
return tmp;
}

and that of b-a is the same as of return distance(a, b)"

This paragraph does not take the expression "iterator - n" into account, where n denotes a value of a distance type between two iterators [Note: According to current wording, the expression "iterator - n" would be resolved as equivalent to "return distance(n, iterator)"]. Even if the expression "iterator - n" were to be reinterpreted as equivalent to "iterator + -n" [Note: This would imply that "a" and "b" were interpreted implicitly as values of iterator types, and "n" as a value of a distance type], then 24.3.4/2 interferes because it says: "Requires: n may be negative only for random access and bidirectional iterators.", and none of the paragraphs quoted above requires the iterators on which the algorithms operate to be of random access or bidirectional category.

2) Description of intended behavior:

For the rest of this Defect Report, it is assumed that the expressions "iterator1 + n" and "iterator1 - iterator2" have the semantics described in the current 25 [lib.algorithms], paragraph 9, but applying to all clauses. The expression "iterator1 - n" is equivalent to a result-iterator for which the expression "result-iterator + n" yields an iterator denoting the same position as iterator1 does. The terms "iterator1", "iterator2" and "result-iterator" shall denote the value of an iterator type, and the term "n" shall denote a value of a distance type between two iterators.

All implementations known to the author of this Defect Report comply with these assumptions. No impact on current code is expected.

3) Proposed fixes:

Change 25 [lib.algorithms], paragraph 9 to:

"In the description of the algorithms operator + and - are used for some of the iterator categories for which they do not have to be defined. In this paragraph, a and b denote values of an iterator type, and n denotes a value of a distance type between two iterators. In these cases the semantics of a+n is the same as that of

{X tmp = a;
advance(tmp, n);
return tmp;
}

,the semantics of a-n denotes the value of an iterator i for which the following condition holds: advance(i, n) == a, and that of b-a is the same as of return distance(a, b)".

Comments to the new wording:

a) The wording "In this paragraph, a and b denote values of an iterator type, and n denotes a value of a distance type between two iterators." was added so the expressions "b-a" and "a-n" are distinguished regarding the types of the values on which they operate.

b) The wording ",the semantics of a-n denotes the value of an iterator i for which the following condition holds: advance(i, n) == a" was added to cover the expression 'iterator - n'. The wording "advance(i, n) == a" was used to avoid a dependency on the semantics of a+n, as the wording "i + n == a" would have implied. However, such a dependency might well be deserved.

c) DR 225 is not considered in the new wording.
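For illustration only, the two equivalences that 25 [lib.algorithms], paragraph 9 does define can be written as the following helpers (names hypothetical); note that no analogous direct formulation exists for "a - n", which is precisely the gap this issue points out.

#include <iterator>
#include <vector>

template <class It, class Dist>
It iter_plus(It a, Dist n)              // semantics of "a + n"
{
    It tmp = a;
    std::advance(tmp, n);
    return tmp;
}

template <class It>
typename std::iterator_traits<It>::difference_type
iter_minus(It a, It b)                  // semantics of "b - a"
{
    return std::distance(a, b);
}

int main()
{
    std::vector<int> v(10, 0);
    std::vector<int>::iterator it = iter_plus(v.begin(), 4);
    std::vector<int>::difference_type d = iter_minus(v.begin(), it);   // 4
    (void)d;
}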

Proposed fixes regarding invalid iterator arithmetic expressions outside clause 25:

Either

a) Move the modified 25 [lib.algorithms], paragraph 9 (as proposed above) before any current invalid iterator arithmetic expression. In that case, the first sentence of 25 [lib.algorithms], paragraph 9, also needs to be modified and could read: "For the rest of this International Standard, ..." / "In the description of the following clauses including this ..." / "In the description of the text below ..." etc. - in any case replacing the word "algorithms", which is a straight reference to clause 25. In that case, 25 [lib.algorithms] paragraph 9 will certainly become obsolete.

Alternatively,

b) Add an appropriate paragraph similar to the resolved 25 [lib.algorithms], paragraph 9, to the beginning of each clause containing invalid iterator arithmetic expressions.

Alternatively,

c) Fix each paragraph (both current wording and possible resolutions of DRs) containing invalid iterator arithmetic expressions separately.

5) References to other DRs:

See DR 225. See DR 237. The resolution could then also read "Linear in last - first".

Proposed resolution:

[Lillehammer: Minor issue, but real. We have a blanket statement about this in 25/11. But (a) it should be in 17, not 25; and (b) it's not quite broad enough, because there are some arithmetic expressions it doesn't cover. Bill will provide wording.]


498. Requirements for partition() and stable_partition() too strong

Section: 25.2.13 [alg.partitions] Status: Open Submitter: Sean Parent, Joe Gottman Date: 2005-05-04

View all issues with Open status.

Discussion:

Problem: The iterator requirements for partition() and stable_partition() [25.2.12] are listed as BidirectionalIterator; however, there are efficient algorithms for these functions that require only ForwardIterator and that have been known since before the standard existed. The SGI implementation includes these (see http://www.sgi.com/tech/stl/partition.html and http://www.sgi.com/tech/stl/stable_partition.html).

Proposed resolution:

Change 25.2.12 from

template<class BidirectionalIterator, class Predicate> 
BidirectionalIterator partition(BidirectionalIterator first, 
                                BidirectionalIterator last, 
                                Predicate pred); 

to

template<class ForwardIterator, class Predicate> 
ForwardIterator partition(ForwardIterator first, 
                          ForwardIterator last, 
                          Predicate pred); 

Change the complexity from

At most (last - first)/2 swaps are done. Exactly (last - first) applications of the predicate are done.

to

If ForwardIterator is a bidirectional_iterator, at most (last - first)/2 swaps are done; otherwise at most (last - first) swaps are done. Exactly (last - first) applications of the predicate are done.

Rationale:

Partition is a "foundation" algorithm useful in many contexts (like sorting as just one example) - my motivation for extending it to include forward iterators is slist - without this extension you can't partition an slist (without writing your own partition). Holes like this in the standard library weaken the argument for generic programming (ideally I'd be able to provide a library that would refine std::partition() to other concepts without fear of conflicting with other libraries doing the same - but that is a digression). I consider the fact that partition isn't defined to work for ForwardIterator a minor embarrassment.
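For reference, a minimal sketch of the kind of ForwardIterator algorithm alluded to above (similar in spirit to the SGI implementation; the name forward_partition is hypothetical). It performs exactly last - first applications of the predicate and at most last - first swaps.

#include <algorithm>
#include <list>

template <class FwdIt, class Pred>
FwdIt forward_partition(FwdIt first, FwdIt last, Pred pred)
{
    // Find the first element that fails the predicate.
    while (first != last && pred(*first))
        ++first;
    if (first == last)
        return first;
    // Swap each later element that satisfies the predicate into place.
    FwdIt next = first;
    ++next;
    for (; next != last; ++next)
        if (pred(*next)) { std::iter_swap(first, next); ++first; }
    return first;
}

struct is_even { bool operator()(int x) const { return x % 2 == 0; } };

int main()
{
    std::list<int> l;
    for (int i = 1; i <= 6; ++i) l.push_back(i);
    std::list<int>::iterator p = forward_partition(l.begin(), l.end(), is_even());
    // elements satisfying the predicate now precede p
    (void)p;
}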

[Mont Tremblant: Moved to Open, request motivation and use cases by next meeting. Sean provided further rationale by post-meeting mailing.]


502. Proposition: Clarification of the interaction between a facet and an iterator

Section: 22.1.1.1.1 [locale.category] Status: Open Submitter: Christopher Conrade Zseleghovski Date: 2005-06-07

View all other issues in [locale.category].

View all issues with Open status.

Discussion:

Motivation:

This requirement seems obvious to me; it is the essence of code modularity. I have complained to Mr. Plauger that the Dinkumware library does not observe this principle, but he objected that this behaviour is not covered in the standard.

Proposed resolution:

Append the following point to 22.1.1.1.1:

6. The implementation of a facet of Table 52 parametrized with an InputIterator/OutputIterator should use that iterator only as a character source/sink, respectively. For a *_get facet, it means that the value received depends only on the sequence of input characters and not on how they are accessed. For a *_put facet, it means that the sequence of characters output depends only on the value to be formatted and not on how the characters are stored.

[ Berlin: Moved to Open, Need to clean up this area to make it clear locales don't have to contain open ended sets of facets. Jack, Howard, Bill. ]


503. more on locales

Section: 22.2 [locale.categories] Status: Open Submitter: P.J. Plauger Date: 2005-06-20

View other active issues in [locale.categories].

View all other issues in [locale.categories].

View all issues with Open status.

Discussion:

a) In 22.2.1.1 para. 2 we refer to "the instantiations required in Table 51" to refer to the facet *objects* associated with a locale. And we almost certainly mean just those associated with the default or "C" locale. Otherwise, you can't switch to a locale that enforces a different mapping between narrow and wide characters, or that defines additional uppercase characters.

b) 22.2.1.5 para. 3 (codecvt) has the same issues.

c) 22.2.1.5.2 (do_unshift) is even worse. It *forbids* the generation of a homing sequence for the basic character set, which might very well need one.

d) 22.2.1.5.2 (do_length) likewise dictates that the default mapping between wide and narrow characters be taken as one-for-one.

e) 22.2.2 para. 2 (num_get/put) is both muddled and vacuous, as far as I can tell. The muddle is, as before, calling Table 51 a list of instantiations. But the constraint it applies seems to me to cover *all* defined uses of num_get/put, so why bother to say so?

f) 22.2.3.1.2 para. 1 (do_decimal_point) says "The required instantiations return '.' or L'.'." Presumably this means "as appropriate for the character type". But given the vague definition of "required" earlier, this overrules *any* change of decimal point for non "C" locales. Surely we don't want to do that.

g) 22.2.3.1.2 para. 2 (do_thousands_sep) says "The required instantiations return ',' or L','." As above, this probably means "as appropriate for the character type". But this overrules the "C" locale, which requires *no* character ('\0') for the thousands separator. Even if we agree that we don't mean to block changes in decimal point or thousands separator, we should also eliminate this clear incompatibility with C.

h) 22.2.3.1.2 para. 2 (do_grouping) says "The required instantiations return the empty string, indicating no grouping." Same considerations as for do_decimal_point.

i) 22.2.4.1 para. 1 (collate) refers to "instantiations required in Table 51". Same bad jargon.

j) 22.2.4.1.2 para. 1 (do_compare) refers to "instantiations required in Table 51". Same bad jargon.

k) 22.2.5 para. 1 (time_get/put) uses the same muddled and vacuous wording as num_get/put.

l) 22.2.6 para. 2 (money_get/put) uses the same muddled and vacuous wording as num_get/put.

m) 22.2.6.3.2 (do_pos/neg_format) says "The instantiations required in Table 51 ... return an object of type pattern initialized to {symbol, sign, none, value}." This once again *overrides* the "C" locale, as well as any other locale.

3) We constrain the use_facet calls that can be made by num_get/put, so why don't we do the same for money_get/put? Or for any of the other facets, for that matter?

4) As an almost aside, we spell out when a facet needs to use the ctype facet, but several also need to use a codecvt facet and we don't say so.

[ Berlin: Bill to provide wording. ]

Proposed resolution:


518. Are insert and erase stable for unordered_multiset and unordered_multimap?

Section: 23.1.3 [unord.req], TR1 6.3.1 [tr.unord.req] Status: Review Submitter: Matt Austern Date: 2005-07-03

View other active issues in [unord.req].

View all other issues in [unord.req].

Discussion:

Issue 371 deals with stability of multiset/multimap under insert and erase (i.e. do they preserve the relative order in ranges of equal elements). The same issue applies to unordered_multiset and unordered_multimap.

[ Moved to open (from review): There is no resolution. ]

[ Toronto: We have a resolution now. Moved to Review. Some concern was noted as to whether this conflicted with existing practice or not. An additional concern was in specifying (partial) ordering for an unordered container. ]

Proposed resolution:

Wording for the proposed resolution is taken from the equivalent text for associative containers.

Change 23.1.3 [unord.req], Unordered associative containers, paragraph 6 to:

An unordered associative container supports unique keys if it may contain at most one element for each key. Otherwise, it supports equivalent keys. unordered_set and unordered_map support unique keys. unordered_multiset and unordered_multimap support equivalent keys. In containers that support equivalent keys, elements with equivalent keys are adjacent to each other. For unordered_multiset and unordered_multimap, insert and erase preserve the relative ordering of equivalent elements.

Change 23.1.3 [unord.req], Unordered associative containers, paragraph 8 to:

The elements of an unordered associative container are organized into buckets. Keys with the same hash code appear in the same bucket. The number of buckets is automatically increased as elements are added to an unordered associative container, so that the average number of elements per bucket is kept below a bound. Rehashing invalidates iterators, changes ordering between elements, and changes which buckets elements appear in, but does not invalidate pointers or references to elements. For unordered_multiset and unordered_multimap, rehashing preserves the relative ordering of equivalent elements.


522. Tuple doesn't define swap

Section: 20.3 [tuple], TR1 6.1 [tr.tuple] Status: Open Submitter: Andy Koenig Date: 2005-07-03

View all issues with Open status.

Discussion:

Tuple doesn't define swap(). It should.

[ Berlin: Doug to provide wording. ]

[ Batavia: Howard to provide wording. ]

[ Toronto: Howard to provide wording (really this time). ]
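A minimal sketch, not proposed wording, of the shape such a swap might take, shown for a hypothetical two-element tuple so the example stays self-contained:

#include <algorithm>

template <class T0, class T1>
struct toy_tuple
{
    T0 first;
    T1 second;
    void swap(toy_tuple& other)
    {
        using std::swap;
        swap(first, other.first);
        swap(second, other.second);
    }
};

template <class T0, class T1>
void swap(toy_tuple<T0, T1>& a, toy_tuple<T0, T1>& b) { a.swap(b); }

int main()
{
    toy_tuple<int, double> a = { 1, 2.5 }, b = { 3, 4.5 };
    swap(a, b);     // a is now {3, 4.5}, b is {1, 2.5}
}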

Proposed resolution:


523. regex case-insensitive character ranges are unimplementable as specified

Section: 28 [re] Status: Open Submitter: Eric Niebler Date: 2005-07-01

View other active issues in [re].

View all other issues in [re].

View all issues with Open status.

Discussion:

A problem with TR1 regex is currently being discussed on the Boost developers list. It involves the handling of case-insensitive matching of character ranges such as [Z-a]. The proper behavior (according to the ECMAScript standard) is unimplementable given the current specification of the TR1 regex_traits<> class template. John Maddock, the author of the TR1 regex proposal, agrees there is a problem. The full discussion can be found at http://lists.boost.org/boost/2005/06/28850.php (first message copied below). We don't have any recommendations as yet.

-- Begin original message --

The situation of interest is described in the ECMAScript specification (ECMA-262), section 15.10.2.15:

"Even if the pattern ignores case, the case of the two ends of a range is significant in determining which characters belong to the range. Thus, for example, the pattern /[E-F]/i matches only the letters E, F, e, and f, while the pattern /[E-f]/i matches all upper and lower-case ASCII letters as well as the symbols [, \, ], ^, _, and `."

A more interesting case is what should happen when doing a case-insensitive match on a range such as [Z-a]. It should match z, Z, a, A and the symbols [, \, ], ^, _, and `. This is not what happens with Boost.Regex (it throws an exception from the regex constructor).

The tough pill to swallow is that, given the specification in TR1, I don't think there is any effective way to handle this situation. According to the spec, case-insensitivity is handled with regex_traits<>::translate_nocase(CharT) -- two characters are equivalent if they compare equal after both are sent through the translate_nocase function. But I don't see any way of using this translation function to make character ranges case-insensitive. Consider the difficulty of detecting whether "z" is in the range [Z-a]. Applying the transformation to "z" has no effect (it is essentially std::tolower). And we're not allowed to apply the transformation to the ends of the range, because as ECMA-262 says, "the case of the two ends of a range is significant."

So AFAICT, TR1 regex is just broken, as is Boost.Regex. One possible fix is to redefine translate_nocase to return a string_type containing all the characters that should compare equal to the specified character. But this function is hard to implement for Unicode, and it doesn't play nice with the existing ctype facet. What a mess!

-- End original message --

[ John Maddock adds: ]

One small correction, I have since found that ICU's regex package does implement this correctly, using a similar mechanism to the current TR1.Regex.

Given an expression [c1-c2] that is compiled as case insensitive it:

Enumerates every character in the range c1 to c2 and converts it to its case-folded equivalent. That case-folded character is then used as a key to a table of equivalence classes, and each member of the class is added to the list of possible matches supported by the character-class. This second step isn't possible with our current traits class design, but isn't necessary if the input text is also converted to a case-folded equivalent on the fly.
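A minimal ASCII-only sketch (purely illustrative; it uses tolower as a crude stand-in for case folding and is not the TR1 traits interface) of the ECMA-262 behaviour described above: the range is built from the raw, un-folded endpoints, and a candidate character matches if its folded form equals the folded form of some member of the range.

#include <cctype>
#include <iostream>

bool matches_range_icase(char ch, char lo, char hi)
{
    int folded = std::tolower(static_cast<unsigned char>(ch));
    for (int c = lo; c <= hi; ++c)      // enumerate the raw, un-folded range
        if (std::tolower(static_cast<unsigned char>(c)) == folded)
            return true;
    return false;
}

int main()
{
    std::cout << matches_range_icase('z', 'Z', 'a') << '\n';   // 1: z matches [Z-a]
    std::cout << matches_range_icase('e', 'E', 'F') << '\n';   // 1: e matches [E-F]
    std::cout << matches_range_icase('b', 'E', 'F') << '\n';   // 0: b does not
}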

ICU applies similar brute force mechanisms to character classes such as [[:lower:]] and [[:word:]], however these are at least cached, so the impact is less noticeable in this case.

Quick and dirty performance comparisons show that expressions such as "[X-\\x{fff0}]+" are indeed very slow to compile with ICU (about 200 times slower than a "normal" expression). For an application that uses a lot of regexes this could have a noticeable performance impact. ICU also has an advantage in that it knows the range of valid character codes: code points outside that range are assumed not to require enumeration, as they cannot be part of any equivalence class. I presume that if we want the TR1.Regex to work with arbitrarily large character sets, enumeration really does become impractical.

Finally note that Unicode has:

Three cases (upper, lower and title). One-to-many and many-to-one case transformations. Characters that have context-sensitive case translations - for example, an uppercase sigma has two different lowercase forms; the form chosen depends on context (is it the end of a word or not). A caseless match for an uppercase sigma should match either of the lowercase forms, which is why case folding is often approximated by tolower(toupper(c)).

Probably we need some way to enumerate character equivalence classes, including digraphs (either as a result or an input), and some way to tell whether the next character pair is a valid digraph in the current locale.

Hoping this doesn't make this even more complex than it was already,

[ Portland: Alisdair: Detect as invalid, throw an exception. Pete: Possible general problem with case insensitive ranges. ]

Proposed resolution:


524. regex named character classes and case-insensitivity don't mix

Section: 28 [re] Status: Open Submitter: Eric Niebler Date: 2005-07-01

View other active issues in [re].

View all other issues in [re].

View all issues with Open status.

Discussion:

This defect is also being discussed on the Boost developers list. The full discussion can be found here: http://lists.boost.org/boost/2005/07/29546.php

-- Begin original message --

Also, I may have found another issue, closely related to the one under discussion. It regards case-insensitive matching of named character classes. The regex_traits<> provides two functions for working with named char classes: lookup_classname and isctype. To match a char class such as [[:alpha:]], you pass "alpha" to lookup_classname and get a bitmask. Later, you pass a char and the bitmask to isctype and get a bool yes/no answer.

But how does case-insensitivity work in this scenario? Suppose we're doing a case-insensitive match on [[:lower:]]. It should behave as if it were [[:lower:][:upper:]], right? But there doesn't seem to be enough smarts in the regex_traits interface to do this.

Imagine I write a traits class which recognizes [[:fubar:]], and the "fubar" char class happens to be case-sensitive. How is the regex engine to know that? And how should it do a case-insensitive match of a character against the [[:fubar:]] char class? John, can you confirm this is a legitimate problem?

I see two options:

1) Add a bool icase parameter to lookup_classname. Then, lookup_classname( "upper", true ) will know to return lower|upper instead of just upper.

2) Add an isctype_nocase function

I prefer (1) because the extra computation happens at the time the pattern is compiled rather than when it is executed.

-- End original message --

For what it's worth, John has also expressed his preference for option (1) above.
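A minimal sketch of option (1), with a hypothetical free function standing in for the traits member: the icase flag widens a case-dependent class to its case-insensitive equivalent at the time the pattern is compiled.

#include <locale>
#include <string>

typedef std::ctype_base::mask class_mask;

class_mask lookup_classname_icase(const std::string& name, bool icase)
{
    class_mask m = static_cast<class_mask>(0);
    if      (name == "lower") m = std::ctype_base::lower;
    else if (name == "upper") m = std::ctype_base::upper;
    else if (name == "alpha") m = std::ctype_base::alpha;
    // ... further classes elided ...
    if (icase && (m == std::ctype_base::lower || m == std::ctype_base::upper))
        m = static_cast<class_mask>(std::ctype_base::lower | std::ctype_base::upper);
    return m;
}

int main()
{
    std::locale loc;
    class_mask m = lookup_classname_icase("lower", true);
    bool matched = std::use_facet<std::ctype<char> >(loc).is(m, 'A');   // true: 'A' is upper
    (void)matched;
}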

Proposed resolution:


527. tr1::bind has lost its Throws clause

Section: 20.5.10.1.3 [func.bind.bind], TR1 3.6.3 [tr.func.bind.bind] Status: Open Submitter: Peter Dimov Date: 2005-10-01

View all issues with Open status.

Discussion:

The original bind proposal gives the guarantee that tr1::bind(f, t1, ..., tN) does not throw when the copy constructors of f, t1, ..., tN don't.

This guarantee is not present in the final version of TR1.

I'm pretty certain that we never removed it on purpose. Editorial omission? :-)

[ Berlin: not quite editorial, needs proposed wording. ]

[ Batavia: Doug to translate wording to variadic templates. ]

[ Toronto: We agree but aren't quite happy with the wording. The "t"'s no longer refer to anything. Alan to provide improved wording. ]

Proposed resolution:

In 20.5.10.1.3 [lib.func.bind.bind] ([tr.func.bind.bind]), add a new paragraph after p2:

Throws: Nothing unless one of the copy constructors of f, t1, t2, ..., tN throws an exception.

Add a new paragraph after p4:

Throws: nothing unless one of the copy constructors of f, t1, t2, ..., tN throws an exception.


529. The standard encourages redundant and confusing preconditions

Section: 17.4.3.8 [res.on.required] Status: Open Submitter: David Abrahams Date: 2005-10-25

View all issues with Open status.

Discussion:

17.4.3.8/1 says:

Violation of the preconditions specified in a function's Required behavior: paragraph results in undefined behavior unless the function's Throws: paragraph specifies throwing an exception when the precondition is violated.

This implies that a precondition violation can lead to defined behavior. That conflicts with the only reasonable definition of precondition: that a violation leads to undefined behavior. Any other definition muddies the waters when it comes to analyzing program correctness, because precondition violations may be routinely done in correct code (e.g. you can use std::vector::at with the full expectation that you'll get an exception when your index is out of range, catch the exception, and continue). Not only is it a bad example to set, but it encourages needless complication and redundancy in the standard. For example:

  21 Strings library 
  21.3.3 basic_string capacity

  void resize(size_type n, charT c);

  5 Requires: n <= max_size()
  6 Throws: length_error if n > max_size().
  7 Effects: Alters the length of the string designated by *this as follows:

The Requires clause is entirely redundant and can be dropped. We could make that simplifying change (and many others like it) even without changing 17.4.3.8/1; the wording there just seems to encourage the redundant and error-prone Requires: clause.

[ Batavia: Alan and Pete to work. ]

Proposed resolution:

1. Change 17.4.3.8/1 to read:

Violation of the preconditions specified in a function's Required behavior: paragraph results in undefined behavior.

2. Go through and remove redundant Requires: clauses. Specifics to be provided by Dave A.

[ Berlin: The LWG requests a detailed survey of part 2 of the proposed resolution. ]

[ Alan provided the survey N2121. ]


539. partial_sum and adjacent_difference should mention requirements

Section: 26.6.3 [partial.sum] Status: Open Submitter: Marc Schoolderman Date: 2006-02-06

View all issues with Open status.

Discussion:

There are some problems in the definition of partial_sum and adjacent_difference in 26.4 [lib.numeric.ops].

Unlike accumulate and inner_product, these functions are not parametrized on a "type T"; instead, 26.4.3 [lib.partial.sum] simply specifies the effects clause as:

Assigns to every element referred to by iterator i in the range [result,result + (last - first)) a value correspondingly equal to

((...(*first + *(first + 1)) + ...) + *(first + (i - result)))

And similarly for BinaryOperation. Using just this definition, it seems logical to expect that:

char i_array[4] = { 100, 100, 100, 100 };
int  o_array[4];

std::partial_sum(i_array, i_array+4, o_array);

Is equivalent to

int o_array[4] = { 100, 100+100, 100+100+100, 100+100+100+100 };

i.e. 100, 200, 300, 400, with addition happening in the result type, int.

Yet all implementations I have tested produce 100, -56, 44, -112, because they are using an accumulator of the InputIterator's value_type, which in this case is char, not int.
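A minimal sketch of the typical implementation those numbers come from (the name partial_sum_like is hypothetical): the running sum is held in the InputIterator's value_type, here char, so each addition wraps before it is ever written to the int output.

#include <iterator>

template <class InIt, class OutIt>
OutIt partial_sum_like(InIt first, InIt last, OutIt result)
{
    if (first == last)
        return result;
    typename std::iterator_traits<InIt>::value_type acc = *first;  // accumulator is char below
    *result = acc;
    while (++first != last) {
        acc = acc + *first;     // 100 + 100 wraps in an 8-bit signed char, typically to -56
        ++result;
        *result = acc;
    }
    return ++result;
}

int main()
{
    char i_array[4] = { 100, 100, 100, 100 };
    int  o_array[4];
    partial_sum_like(i_array, i_array + 4, o_array);
    // o_array is typically 100, -56, 44, -112, not 100, 200, 300, 400
}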

The issue becomes more noticeable when the result of the expression *i + *(i+1) or binary_op(*i, *(i+1)) can't be converted to the value_type. In a contrived example:

enum not_int { x = 1, y = 2 };
...
not_int e_array[4] = { x, x, y, y };
std::partial_sum(e_array, e_array+4, o_array);

Is it the intent that the operations happen in the input type, or in the result type?

If the intent is that operations happen in the result type, something like this should be added to the "Requires" clause of 26.4.3/4 [lib.partial.sum]:

The type of *i + *(i+1) or binary_op(*i, *(i+1)) shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types.

(As also required for T in 26.4.1 [lib.accumulate] and 26.4.2 [lib.inner.product].)

The "auto initializer" feature proposed in N1894 is not required to implement partial_sum this way. The 'narrowing' behaviour can still be obtained by using the std::plus<> function object.

If the intent is that operations happen in the input type, then something like this should be added instead;

The type of *first shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types. The result of *i + *(i+1) or binary_op(*i, *(i+1)) shall be convertible to this type.

The 'widening' behaviour can then be obtained by writing a custom proxy iterator, which is somewhat involved.

In both cases, the semantics should probably be clarified.

26.4.4 [lib.adjacent.difference] is similarly underspecified, although all implementations seem to perform operations in the 'result' type:

unsigned char i_array[4] = { 4, 3, 2, 1 };
int o_array[4];

std::adjacent_difference(i_array, i_array+4, o_array);

o_array is 4, -1, -1, -1 as expected, not 4, 255, 255, 255.

In any case, adjacent_difference doesn't mention the requirements on the value_type; it can be brought in line with the rest of 26.4 [lib.numeric.ops] by adding the following to 26.4.4/2 [lib.adjacent.difference]:

The type of *first shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types.

[ Berlin: Giving output iterator's value_types very controversial. Suggestion of adding signatures to allow user to specify "accumulator". ]

Proposed resolution:


546. _Longlong and _ULonglong are integer types

Section: TR1 5.1.1 [tr.rand.req] Status: New Submitter: Matt Austern Date: 2006-01-10

View all issues with New status.

Discussion:

The TR sneaks in two new integer types, _Longlong and _Ulonglong, in [tr.c99]. The rest of the TR should use those types. I believe this affects two places. First, the random number requirements, 5.1.1/10-11, lists all of the types with which template parameters named IntType and UIntType may be instantiated. _Longlong (or "long long", assuming it is added to C++0x) should be added to the IntType list, and _Ulonglong (again, or "unsigned long long") should be added to the UIntType list. Second, 6.3.2 lists the types for which hash<> is required to be instantiable. _Longlong and _Ulonglong should be added to that list, so that people may use long long as a hash key.

Proposed resolution:


548. May random_device block?

Section: 26.4.6 [rand.device], TR1 5.1.6 [tr.rand.device] Status: Open Submitter: Matt Austern Date: 2006-01-10

View all issues with Open status.

Discussion:

Class random_device "produces non-deterministic random numbers", using some external source of entropy. In most real-world systems, the amount of available entropy is limited. Suppose that entropy has been exhausted. What is an implementation permitted to do? In particular, is it permitted to block indefinitely until more random bits are available, or is the implementation required to detect failure immediately? This is not an academic question. On Linux a straightforward implementation would read from /dev/random, and "When the entropy pool is empty, reads to /dev/random will block until additional environmental noise is gathered." Programmers need to know whether random_device is permitted to (or possibly even required to?) behave the same way.

[ Berlin: Walter: N1932 considers this NAD. Does the standard specify whether std::cin may block? ]

Proposed resolution:


550. What should the return type of pow(float,int) be?

Section: 26.7 [c.math] Status: New Submitter: Howard Hinnant Date: 2006-01-12

View all other issues in [c.math].

View all issues with New status.

Discussion:

Assuming we adopt the C compatibility package from C99, what should the return type of the following signature be:

?  pow(float, int);

C++03 says that the return type should be float. TR1 and C90/99 say the return type should be double. This can put clients into a situation where C++03 provides answers that are not as high quality as C90/C99/TR1. For example:

#include <math.h>

int main()
{
    float x = 2080703.375F;
    double y = pow(x, 2);
}

Assuming an IEEE 32 bit float and IEEE 64 bit double, C90/C99/TR1 all suggest:

y = 4329326534736.390625

which is exactly right. While C++98/C++03 demands:

y = 4329326510080.

which is only approximately right.

I recommend that C++0X adopt the mixed mode arithmetic already adopted by Fortran, C and TR1 and make the return type of pow(float,int) be double.

Proposed resolution:


552. random_shuffle and its generator

Section: 25.2.12 [alg.random.shuffle] Status: New Submitter: Martin Sebor Date: 2006-01-25

View all issues with New status.

Discussion:

...is specified to shuffle its range by calling swap but not how (or even that) it's supposed to use the RandomNumberGenerator argument passed to it.

Shouldn't we require that the generator object actually be used by the algorithm to obtain a series of random numbers and specify how many times its operator() should be invoked by the algorithm?
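For illustration, a minimal sketch (the name random_shuffle_like is hypothetical) of the common Fisher-Yates technique, which makes explicit what the standard currently leaves unsaid: one call to the generator per element after the first, each with the number of candidate positions as its argument.

#include <algorithm>
#include <cstdlib>
#include <iterator>
#include <vector>

template <class RanIt, class Rng>
void random_shuffle_like(RanIt first, RanIt last, Rng& rand)
{
    typedef typename std::iterator_traits<RanIt>::difference_type diff_t;
    if (first == last)
        return;
    for (RanIt i = first + 1; i != last; ++i) {
        diff_t j = rand((i - first) + 1);   // one call per element after the first; rand(n) is in [0, n)
        std::iter_swap(i, first + j);
    }
}

struct mod_rand
{
    long operator()(long n) const { return std::rand() % n; }  // crude, but enough for a sketch
};

int main()
{
    std::vector<int> v;
    for (int i = 0; i < 10; ++i) v.push_back(i);
    mod_rand r;
    random_shuffle_like(v.begin(), v.end(), r);
}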

Proposed resolution:


556. is Compare a BinaryPredicate?

Section: 25.3 [alg.sorting] Status: Open Submitter: Martin Sebor Date: 2006-02-05

View all other issues in [alg.sorting].

View all issues with Open status.

Discussion:

In 25, p8 we allow BinaryPredicates to return a type that's convertible to bool but need not actually be bool. That allows predicates to return things like proxies and requires that implementations be careful about what kinds of expressions they use the result of the predicate in (e.g., the expression in if (!pred(a, b)) need not be well-formed since the negation operator may be inaccessible or return a type that's not convertible to bool).
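A minimal sketch (hypothetical types) of the kind of predicate the paragraph above permits: its result converts to bool, but negating it directly selects an inaccessible operator, so an implementation that writes if (!pred(a, b)) will not compile with it.

struct awkward_result
{
    bool value;
    operator bool() const { return value; }    // "convertible to bool"
private:
    bool operator!() const;                     // declared but inaccessible
};

struct awkward_less
{
    awkward_result operator()(int a, int b) const
    {
        awkward_result r;
        r.value = a < b;
        return r;
    }
};

int main()
{
    awkward_less comp;
    bool ok = comp(1, 2);           // fine: converts to bool
    // bool bad = !comp(1, 2);      // would select the private operator! and fail to compile
    (void)ok;
}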

Here's the text for reference:

...if an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should work correctly in the construct if (binary_pred(*first1, first2)){...}.

In 25.3, p2 we require that the Compare function object return true or false, which would seem to preclude such proxies. The relevant text is here:

Compare is used as a function object which returns true if the first argument is less than the second, and false otherwise...

Proposed resolution:

I think we could fix this by rewording 25.3, p2 to read something like:

-2- Compare is a BinaryPredicate. The return value of the function call operator applied to an object of type Compare, when converted to type bool, yields true if the first argument of the call is less than the second, and false otherwise. Compare comp is used throughout for algorithms assuming an ordering relation. It is assumed that comp will not apply any non-constant function through the dereferenced iterator.

[ Portland: Jack to define "convertible to bool" such that short circuiting isn't destroyed. ]


557. TR1: div(_Longlong, _Longlong) vs div(intmax_t, intmax_t)

Section: 18.3 [cstdint], TR1 8.22 [tr.c99.cstdint] Status: Open Submitter: Paolo Carlini Date: 2006-02-06

View all other issues in [cstdint].

View all issues with Open status.

Discussion:

I'm seeing a problem with such overloads: when _Longlong == intmax_t == long long, we end up, essentially, with the same arguments and different return types (lldiv_t and imaxdiv_t, respectively). Similar issue with abs(_Longlong) and abs(intmax_t), of course.

Comparing sections 8.25 and 8.11, I see an important difference, however: 8.25.3 and 8.25.4 carefully describe div and abs for _Longlong types (rightfully, because they are not moved over directly from C99), whereas there is no equivalent in 8.11: the abs and div overloads for intmax_t types appear only in the synopsis and are not described anywhere; in particular, there is no mention in 8.11.2 (at variance with 8.25.2).

I'm wondering whether we really, really, want div and abs for intmax_t...
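A minimal sketch of the conflict, using stand-in typedefs rather than the real TR1 names, for a platform on which _Longlong and intmax_t are the same type:

typedef long long Longlong_t;    // stand-in for _Longlong
typedef long long intmax_like;   // stand-in for intmax_t on such a platform

struct lldiv_like   { Longlong_t  quot, rem; };   // stand-in for lldiv_t
struct imaxdiv_like { intmax_like quot, rem; };   // stand-in for imaxdiv_t

lldiv_like div(Longlong_t numer, Longlong_t denom);   // the 8.25-style overload

// The 8.11-style overload would be
//
//   imaxdiv_like div(intmax_like numer, intmax_like denom);
//
// which, with the typedefs above, differs from the previous declaration only
// in its return type, so declaring both in one translation unit is ill-formed.

int main() {}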

Proposed resolution:

[ Portland: no consensus. ]

Rationale:

[ Batavia, Bill: The <cstdint> synopsis in TR1 8.11.1 [tr.c99.cinttypes.syn] contains: ]

intmax_t imaxabs(intmax_t i);
intmax_t abs(intmax_t i);

imaxdiv_t imaxdiv(intmax_t numer, intmax_t denom);
imaxdiv_t div(intmax_t numer, intmax_t denom);

[ and in TR1 8.11.2 [tr.c99.cinttypes.def]: ]

The header defines all functions, types, and macros the same as C99 subclause 7.8.

[ This is as much definition as we give for most other C99 functions, so nothing need change. We might, however, choose to add the footnote: ]

[Note: These overloads for abs and div may well be equivalent to those that take long long arguments. If so, the implementation is responsible for avoiding conflicting declarations. -- end note]


561. inserter overly generic

Section: 24.4.2.6.5 [inserter] Status: New Submitter: Howard Hinnant Date: 2006-02-21

View all issues with New status.

Discussion:

The declaration of std::inserter is:

template <class Container, class Iterator>
insert_iterator<Container>
inserter(Container& x, Iterator i);

The template parameter Iterator in this function is completely unrelated to the template parameter Container when it doesn't need to be. This causes the code to be overly generic. That is, any type at all can be deduced as Iterator, whether or not it makes sense. Now the same is true of Container. However, for every free (unconstrained) template parameter one has in a signature, the opportunity for a mistaken binding grows geometrically.

It would be much better if inserter had the following signature instead:

template <class Container>
insert_iterator<Container>
inserter(Container& x, typename Container::iterator i);

Now there is only one free template parameter. And the second argument to inserter must be implicitly convertible to the container's iterator, else the call will not be a viable overload (allowing other functions in the overload set to take precedence). Furthermore, the first parameter must have a nested type named iterator, or again the binding to std::inserter is not viable. Contrast this with the current situation where any type can bind to Container or Iterator and those types need not be anything closely related to containers or iterators.

This can adversely impact well written code. Consider:

#include <iterator>
#include <string>

namespace my
{

template <class String>
struct my_type {};

struct my_container
{
template <class String>
void push_back(const my_type<String>&);
};

template <class String>
void inserter(const my_type<String>& m, my_container& c) {c.push_back(m);}

}  // my

int main()
{
    my::my_container c;
    my::my_type<std::string> m;
    inserter(m, c);
}

Today this code fails because the call to inserter binds to std::inserter instead of to my::inserter. However with the proposed change std::inserter will no longer be a viable function which leaves only my::inserter in the overload resolution set. Everything works as the client intends.

To make matters a little more insidious, the above example works today if you simply change the first argument to an rvalue:

    inserter(my::my_type<std::string>(), c);

It will also work if instantiated with some string type other than std::string (or any other std type). It will also work if <iterator> happens to not get included.

And it will fail again for such innocuous reasons as my_type or my_container privately deriving from any std type.

It seems unfortunate that such simple changes in the client's code can result in such radically differing behavior.

Proposed resolution:

Change 24.2:

24.2 Header <iterator> synopsis

...
template <class Container>
   insert_iterator<Container> inserter(Container& x, typename Container::iterator i);
...

Change 24.4.2.5:

24.4.2.5 Class template insert_iterator

...
template <class Container>
   insert_iterator<Container> inserter(Container& x, typename Container::iterator i);
...

Change 24.4.2.6.5:

24.4.2.6.5 inserter

template <class Container>
   insert_iterator<Container> inserter(Container& x, typename Container::iterator i);

-1- Returns: insert_iterator<Container>(x,typename Container::iterator(i)).


562. stringbuf ctor inefficient

Section: 27.7 [string.streams] Status: New Submitter: Martin Sebor Date: 2006-02-23

View all other issues in [string.streams].

View all issues with New status.

Discussion:

For better efficiency, the requirement on the stringbuf ctor that takes a string argument should be loosened up to let it set epptr() beyond just one past the last initialized character just like overflow() has been changed to be allowed to do (see issue 432). That way the first call to sputc() on an object won't necessarily cause a call to overflow. The corresponding change should be made to the string overload of the str() member function.

Proposed resolution:

Change 27.7.1.1, p3 of the Working Draft, N1804, as follows:

explicit basic_stringbuf(const basic_string<charT,traits,Allocator>& str,
               ios_base::openmode which = ios_base::in | ios_base::out);

-3- Effects: Constructs an object of class basic_stringbuf, initializing the base class with basic_streambuf() (27.5.2.1), and initializing mode with which. Then copies the content of str into the basic_stringbuf underlying character sequence. If which & ios_base::out is true, initializes the output sequence such that pbase() points to the first underlying character, epptr() points one past the last underlying character, and pptr() is equal to epptr() if which & ios_base::ate is true, otherwise pptr() is equal to pbase(). If which & ios_base::in is true, initializes the input sequence such that eback() and gptr() point to the first underlying character and egptr() points one past the last underlying character.

Change the Effects clause of the str() in 27.7.1.2, p2 to read:

-2- Effects: Copies the contents of s into the basic_stringbuf underlying character sequence and initializes the input and output sequences according to mode. If mode & ios_base::out is true, initializes the output sequence such that pbase() points to the first underlying character, epptr() points one past the last underlying character, and pptr() is equal to epptr() if mode & ios_base::in is true, otherwise pptr() is equal to pbase(). If mode & ios_base::in is true, initializes the input sequence such that eback() and gptr() point to the first underlying character and egptr() points one past the last underlying character.

-3- Postconditions: If mode & ios_base::out is true, pbase() points to the first underlying character and (epptr() >= pbase() + s.size()) holds; in addition, if mode & ios_base::in is true, (pptr() == pbase() + s.size()) holds, otherwise (pptr() == pbase()) is true. If mode & ios_base::in is true, eback() points to the first underlying character, and (gptr() == eback()) and (egptr() == eback() + s.size()) hold.


563. stringbuf seeking from end

Section: 27.7.1.4 [stringbuf.virtuals] Status: New Submitter: Martin Sebor Date: 2006-02-23

View other active issues in [stringbuf.virtuals].

View all other issues in [stringbuf.virtuals].

View all issues with New status.

Discussion:

According to Table 92 (unchanged by issue 432), when (way == end) the newoff value in out mode is computed as the difference between epptr() and pbase().

This value isn't meaningful unless the value of epptr() can be precisely controlled by a program. That used to be possible until we accepted the resolution of issue 432, but since then the requirements on overflow() have been relaxed to allow it to make more than 1 write position available (i.e., by setting epptr() to some unspecified value past pptr()). So after the first call to overflow() positioning the output sequence relative to end will have unspecified results.

In addition, in in|out mode, since (egptr() == epptr()) need not hold, there are two different possible values for newoff: epptr() - pbase() and egptr() - eback().

Proposed resolution:

Change the newoff column in the last row of Table 94 to read:

the high mark pointer minus the beginning pointer (high_mark - xbeg).


564. stringbuf seekpos underspecified

Section: 27.7.1.4 [stringbuf.virtuals] Status: New Submitter: Martin Sebor Date: 2006-02-23

View other active issues in [stringbuf.virtuals].

View all other issues in [stringbuf.virtuals].

View all issues with New status.

Discussion:

The effects of the seekpos() member function of basic_stringbuf simply say that the function positions the input and/or output sequences but fail to spell out exactly how. This is in contrast to the detail in which seekoff() is described.

Proposed resolution:

Change 27.7.1.3, p13 to read:

-13- Effects: Same as seekoff(off_type(sp), ios_base::beg, which).


565. xsputn inefficient

Section: 27.5.2.4.5 [streambuf.virt.put] Status: New Submitter: Martin Sebor Date: 2006-02-23

View all issues with New status.

Discussion:

streambuf::xsputn() is specified to have the effect of "writing up to n characters to the output sequence as if by repeated calls to sputc(c)."

Since sputc() is required to call overflow() when (pptr() == epptr()) is true, strictly speaking xsputn() should do the same. However, doing so would be suboptimal in some interesting cases, such as in unbuffered mode or when the buffer is basic_stringbuf.

Assuming calling overflow() is not really intended to be required and the wording is simply meant to describe the general effect of appending to the end of the sequence it would be worthwhile to mention in xsputn() that the function is not actually required to cause a call to overflow().
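A minimal sketch, under the assumption that the relaxation is intended, of the kind of overriding xsputn the issue has in mind: it copies into the put area in blocks and falls back on overflow() only when the put area is exhausted, rather than going through sputc() character by character.

#include <algorithm>
#include <cstring>
#include <streambuf>

class block_buf : public std::streambuf
{
protected:
    virtual std::streamsize xsputn(const char* s, std::streamsize n)
    {
        std::streamsize written = 0;
        while (written < n) {
            std::streamsize room = epptr() - pptr();
            if (room > 0) {
                // Copy a whole block directly into the put area.
                std::streamsize chunk = std::min(room, n - written);
                std::memcpy(pptr(), s + written, static_cast<std::size_t>(chunk));
                pbump(static_cast<int>(chunk));
                written += chunk;
            } else if (overflow(traits_type::to_int_type(s[written]))
                       == traits_type::eof()) {
                break;          // stop where a call to sputc() would have returned eof
            } else {
                ++written;      // overflow() consumed one character
            }
        }
        return written;
    }
};

int main() { block_buf b; (void)b; }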

Proposed resolution:

Add the following sentence to the xsputn() Effects clause in 27.5.2.4.5, p1 (N1804):

-1- Effects: Writes up to n characters to the output sequence as if by repeated calls to sputc(c). The characters written are obtained from successive elements of the array whose first element is designated by s. Writing stops when either n characters have been written or a call to sputc(c) would return traits::eof(). It is unspecified whether the function calls overflow() when (pptr() == epptr()) becomes true or whether it achieves the same effects by other means.

In addition, I suggest to add a footnote to this function with the same text as Footnote 292 to make it extra clear that derived classes are permitted to override xsputn() for efficiency.


567. streambuf inserter and extractor should be unformatted

Section: 27.6 [iostream.format] Status: New Submitter: Martin Sebor Date: 2006-02-25

View other active issues in [iostream.format].

View all other issues in [iostream.format].

View all issues with New status.

Discussion:

Issue 60 explicitly made the extractor and inserter operators that take a basic_streambuf* argument formatted input and output functions, respectively. I believe that's wrong, certainly in the case of the extractor, since formatted functions begin by extracting and discarding whitespace. The extractor should not discard any characters.

Proposed resolution:

I propose to change each operator to behave as unformatted input and output function, respectively. The changes below are relative to the working draft document number N1804.

Specifically, change 27.6.1.2.3, p14 as follows:

Effects: Behaves as an unformatted input function (as described in 27.6.1.3, paragraph 1).

And change 27.6.2.5.3, p7 as follows:

Effects: Behaves as an unformatted output function (as described in 27.6.2.6, paragraph 1).


568. log2 overloads missing

Section: TR1 8.16.4 [tr.c99.cmath.over] Status: New Submitter: Paolo Carlini Date: 2006-03-07

View all issues with New status.

Discussion:

log2 is missing from the list of "additional overloads" in TR1 8.16.4 [tr.c99.cmath.over] p1.

Hinnant: This is a TR1 issue only. It is fixed in the current (N2135) WD.

Proposed resolution:

Add log2 to the list of functions in TR1 8.16.4 [tr.c99.cmath.over] p1.


570. Request adding additional explicit specializations of char_traits

Section: 21.1 [char.traits] Status: Open Submitter: Jack Reeves Date: 2006-04-06

View all issues with Open status.

Discussion:

Currently, the Standard Library specifies only a declaration for template class char_traits<> and requires the implementation provide two explicit specializations: char_traits<char> and char_traits<wchar_t>. I feel the Standard should require explicit specializations for all built-in character types, i.e. char, wchar_t, unsigned char, and signed char.

I have put together a paper (N1985) that describes this in more detail and includes all the necessary wording.

[ Portland: Jack will rewrite N1985 to propose a primary template that will work with other integral types. ]

[ Toronto: issue has grown with addition of char16_t and char32_t. ]

Proposed resolution:


573. C++0x file positioning should handle modern file sizes

Section: 27.4.3 [fpos] Status: New Submitter: Beman Dawes Date: 2006-04-12

View all other issues in [fpos].

View all issues with New status.

Discussion:

There are two deficiencies related to file sizes:

  1. It doesn't appear that the Standard Library is specified in a way that handles modern file sizes, which are often too large to be represented by an unsigned long.
  2. The std::fpos class does not currently have the ability to set/get file positions.

The Dinkumware implementation of the Standard Library as shipped with the Microsoft compiler copes with these issues by:

  1. Defining fpos_t be long long, which is large enough to represent any file position likely in the foreseeable future.
  2. Adding member functions to class fpos. For example,
    fpos_t seekpos() const;
    

Because there are so many types relating to file positions and offsets (fpos_t, fpos, pos_type, off_type, streamoff, streamsize, streampos, wstreampos, and perhaps more), it is difficult to know if the Dinkumware extensions are sufficient. But they seem a useful starting place for discussions, and they do represent existing practice.

Proposed resolution:


574. DR 369 Contradicts Text

Section: 27.3 [iostream.objects] Status: New Submitter: Pete Becker Date: 2006-04-18

View all other issues in [iostream.objects].

View all issues with New status.

Discussion:

lib.iostream.objects requires that the standard stream objects are never destroyed, and it requires that they be destroyed.

DR 369 adds words to say that we really mean for ios_base::Init objects to force construction of standard stream objects. It ends, though, with the phrase "these stream objects shall be destroyed after the destruction of dynamically ...". However, the rule for destruction is stated in the standard: "The objects are not destroyed during program execution."

Proposed resolution:


577. upper_bound(first, last, ...) cannot return last

Section: 25.3.3.2 [upper.bound] Status: Ready Submitter: Seungbeom Kim Date: 2006-05-03

View all issues with Ready status.

Discussion:

ISO/IEC 14882:2003 says:

25.3.3.2 upper_bound

Returns: The furthermost iterator i in the range [first, last) such that for any iterator j in the range [first, i) the following corresponding conditions hold: !(value < *j) or comp(value, *j) == false.

From the description above, upper_bound cannot return last, since it's not in the interval [first, last). This seems to be a typo, because if value is greater than or equal to any other values in the range, or if the range is empty, returning last seems to be the intended behaviour. The corresponding interval for lower_bound is also [first, last].

Proposed resolution:

Change [lib.upper.bound]:

Returns: The furthermost iterator i in the range [first, last] such that for any iterator j in the range [first, i) the following corresponding conditions hold: !(value < *j) or comp(value, *j) == false.


579. erase(iterator) for unordered containers should not return an iterator

Section: 23.1.3 [unord.req] Status: Open Submitter: Joaquín M López Muñoz Date: 2006-06-13

View other active issues in [unord.req].

View all other issues in [unord.req].

View all issues with Open status.

Discussion:

See N2023 for full discussion.

Proposed resolution:

Option 1:

The problem can be eliminated by omitting the requirement that a.erase(q) return an iterator. This is, however, in contrast with the equivalent requirements for other standard containers.

Option 2:

a.erase(q) can be made to compute the next iterator only when explicitly requested: the technique consists in returning a proxy object implicitly convertible to iterator, so that

iterator q1=a.erase(q);

works as expected, while

a.erase(q);

does not ever invoke the conversion-to-iterator operator, thus avoiding the associated computation. To allow this technique, some sections of TR1 along the lines of "return value is an iterator..." should be changed to "return value is an unspecified object implicitly convertible to an iterator...". Although this trick is expected to work transparently, it can have some collateral effects when the expression a.erase(q) is used inside generic code.

Rationale:

N2023 was discussed in Portland and the consensus was that there appears to be no need for either change proposed in the paper. The consensus opinion was that since the iterator could serve as its own proxy, there appears to be no need for the change. In general, "converts to" is undesirable because it interferes with template matching.

Post Toronto: There does not at this time appear to be consensus with the Portland consensus.


580. unused allocator members

Section: 20.1.2 [allocator.requirements] Status: Open Submitter: Martin Sebor Date: 2006-06-14

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Duplicate of: 479

Discussion:

C++ Standard Library templates that take an allocator as an argument are required to call the allocate() and deallocate() members of the allocator object to obtain storage. However, they do not appear to be required to call any other allocator members such as construct(), destroy(), address(), and max_size(). This makes these allocator members less than useful in portable programs.

It's unclear to me whether the absence of the requirement to use these allocator members is an unintentional omission or a deliberate choice. However, since the functions exist in the standard allocator and since they are required to be provided by any user-defined allocator, I believe the standard ought to be clarified to explicitly specify whether programs should or should not be able to rely on standard containers calling the functions.

I propose that all containers be required to make use of these functions.
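A minimal sketch (a hypothetical single-element container, not proposed wording) of what "make use of these functions" would mean in practice: construction, destruction, addressing, and sizing all route through the user-supplied allocator rather than through placement new, explicit destructor calls, and the like.

#include <memory>

template <class T, class Alloc = std::allocator<T> >
class tiny_buffer
{
public:
    explicit tiny_buffer(const T& value, Alloc a = Alloc())
        : alloc_(a), p_(alloc_.allocate(1))
    {
        alloc_.construct(p_, value);    // rather than ::new (p_) T(value); error handling elided
    }
    ~tiny_buffer()
    {
        alloc_.destroy(p_);             // rather than p_->~T()
        alloc_.deallocate(p_, 1);
    }
    typename Alloc::pointer   where()    { return alloc_.address(*p_); }
    typename Alloc::size_type max_size() { return alloc_.max_size(); }
private:
    tiny_buffer(const tiny_buffer&);            // non-copyable, for brevity
    tiny_buffer& operator=(const tiny_buffer&);
    Alloc alloc_;
    typename Alloc::pointer p_;
};

int main()
{
    tiny_buffer<int> b(42);
    (void)b.where();
    (void)b.max_size();
}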

[ Batavia: We support this resolution. Martin to provide wording. ]

[ pre-Oxford: Martin provided wording. ]

Proposed resolution:

       

Specifically, I propose to change 23.1 [container.requirements], p9 as follows:        

           

-9- Copy constructors for all container types defined in this clause that are parametrized on Allocator copy the allocator argument from their respective first parameters. All other constructors for these container types take a const Allocator& argument (20.1.6), an allocator whose value_type is the same as the container's value_type. A copy of this argument shall be used for any memory allocation and deallocation performed, by these constructors and by all member functions, during the lifetime of each container object. Allocation shall be performed "as if" by calling the allocate() member function on a copy of the allocator object of the appropriate type (New Footnote), and deallocation "as if" by calling deallocate() on a copy of the same allocator object of the corresponding type. A copy of this argument shall also be used to construct and destroy objects whose lifetime is managed by the container, including but not limited to those of the container's value_type, and to obtain their address. All objects residing in storage allocated by a container's allocator shall be constructed "as if" by calling the construct() member function on a copy of the allocator object of the appropriate type. The same objects shall be destroyed "as if" by calling destroy() on a copy of the same allocator object of the same type. The address of such objects shall be obtained "as if" by calling the address() member function on a copy of the allocator object of the appropriate type. Finally, a copy of this argument shall be used by its container object to determine the maximum number of objects of the container's value_type the container may store at the same time. The container member function max_size() obtains this number from the value returned by a call to get_allocator().max_size(). In all container types defined in this clause that are parametrized on Allocator, the member get_allocator() returns a copy of the Allocator object used to construct the container. (Footnote 258)

New Footnote: This type may be different from Allocator: it may be derived from Allocator via Allocator::rebind<U>::other for the appropriate type U.

           
       

The proposed wording seems cumbersome but I couldn't think of a better way to describe the requirement that containers use their Allocator to manage only objects (regardless of their type) that persist over their lifetimes and not, for example, temporaries created on the stack. That is, containers shouldn't be required to call Allocator::construct(Allocator::allocate(1), elem) just to construct a temporary copy of an element, or Allocator::destroy(Allocator::address(temp), 1) to destroy temporaries.

[ Howard: This same paragraph will need some work to accommodate 431. ]

[ post Oxford: This would be rendered NAD Editorial by acceptance of N2257. ]


581. flush() not unformatted function

Section: 27.6.2.7 [ostream.unformatted] Status: New Submitter: Martin Sebor Date: 2006-06-14

View all other issues in [ostream.unformatted].

View all issues with New status.

Discussion:

The resolution of issue 60 changed basic_ostream::flush() so as not to require it to behave as an unformatted output function. That has, in my opinion, at least two problematic consequences:

First, flush() now calls rdbuf()->pubsync() unconditionally, without regard to the state of the stream. I can't think of any reason why flush() should behave differently from the vast majority of stream functions in this respect.

Second, flush() is not required to catch exceptions from pubsync() or set badbit in response to such events. That doesn't seem right either, as most other stream functions do so.

Proposed resolution:

I propose to revert the resolution of issue 60 with respect to flush(). Specifically, I propose to change 27.6.2.6, p7 as follows:

Effects: Behaves as an unformatted output function (as described in 27.6.2.6, paragraph 1). If rdbuf() is not a null pointer, constructs a sentry object. If this object returns true when converted to a value of type bool the function calls rdbuf()->pubsync(). If that function returns -1 calls setstate(badbit) (which may throw ios_base::failure (27.4.4.3)). Otherwise, if the sentry object returns false, does nothing. Does not behave as an unformatted output function (as described in 27.6.2.6, paragraph 1).
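
For illustration only, a free-standing sketch (not library source) of the behavior the proposed wording describes; the name flush_sketch is hypothetical:

    #include <ostream>

    // Flush guarded by a sentry; pubsync() failure and exceptions are reported
    // through badbit, as for other unformatted output functions.
    template <class charT, class traits>
    std::basic_ostream<charT, traits>&
    flush_sketch(std::basic_ostream<charT, traits>& os)
    {
        if (os.rdbuf()) {
            typename std::basic_ostream<charT, traits>::sentry ok(os);
            if (ok) {
                try {
                    if (os.rdbuf()->pubsync() == -1)
                        os.setstate(std::ios_base::badbit);  // may throw ios_base::failure
                } catch (...) {
                    // A real implementation records badbit without throwing here and
                    // rethrows the caught exception if badbit is set in os.exceptions().
                    os.setstate(std::ios_base::badbit);
                }
            }
        }
        return os;
    }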


582. specialized algorithms and volatile storage

Section: 20.6.4.1 [uninitialized.copy] Status: Open Submitter: Martin Sebor Date: 2006-06-14

View all issues with Open status.

Discussion:

The specialized algorithms [lib.specialized.algorithms] are specified as having the general effect of invoking the following expression:


new (static_cast<void*>(&*i))
    typename iterator_traits<ForwardIterator>::value_type (x)

            

This expression is ill-formed when the type of the subexpression &*i is some volatile-qualified T.
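
For illustration, a call that the current wording makes ill-formed (shown commented out); &*result would have type volatile int*, and static_cast<void*> cannot cast away the volatile qualifier:

    #include <memory>

    volatile int dst[3];
    int src[3] = {1, 2, 3};

    // std::uninitialized_copy(src, src + 3, dst);  // ill-formed under the Effects
    //                                              // expression quoted above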

[ Batavia: Lack of support for proposed resolution but agree there is a defect. Howard to look at wording. Concern that move semantics properly expressed if iterator returns rvalue. ]

Proposed resolution:

In order to allow these algorithms to operate on volatile storage I propose to change the expression so as to make it well-formed even for pointers to volatile types. Specifically, I propose the following changes to clauses 20 and 24. Change 20.6.4.1, p1 to read:


Effects:

typedef typename iterator_traits<ForwardIterator>::pointer    pointer;
typedef typename iterator_traits<ForwardIterator>::value_type value_type;

for (; first != last; ++result, ++first)
    new (static_cast<void*>(const_cast<pointer>(&*result)))
        value_type (*first);

            

change 20.6.4.2, p1 to read


Effects:

typedef typename iterator_traits<ForwardIterator>::pointer    pointer;
typedef typename iterator_traits<ForwardIterator>::value_type value_type;

for (; first != last; ++first)
    new (static_cast<void*>(const_cast<pointer>(&*first)))
        value_type (x);

            

and change 20.6.4.3, p1 to read


Effects:

typedef typename iterator_traits<ForwardIterator>::pointer    pointer;
typedef typename iterator_traits<ForwardIterator>::value_type value_type;

for (; n--; ++first)
    new (static_cast<void*>(const_cast<pointer>(&*first)))
        value_type (x);

            

In addition, since there is no partial specialization iterator_traits<volatile T*>, I propose to add one, paralleling the existing specialization for const T*. Specifically, I propose to add the following text to the end of 24.3.1, p3:

and for pointers to volatile as


namespace std {
    template<class T> struct iterator_traits<volatile T*> {
        typedef ptrdiff_t difference_type;
        typedef T value_type;
        typedef volatile T* pointer;
        typedef volatile T& reference;
        typedef random_access_iterator_tag iterator_category;
    };
}

            

Note that the change to iterator_traits isn't necessary in order to implement the specialized algorithms in a way that allows them to operate on volatile storage. It is only necessary in order to specify their effects in terms of iterator_traits as is done here. Implementations can (and some do) achieve the same effect by means of function template overloading.


585. facet error reporting

Section: 22.2 [locale.categories] Status: New Submitter: Martin Sebor, Paolo Carlini Date: 2006-06-22

View other active issues in [locale.categories].

View all other issues in [locale.categories].

View all issues with New status.

Discussion:

Section 22.2, paragraph 2 requires facet get() members that take an ios_base::iostate& argument, err, to ignore the (initial) value of the argument, but to set it to ios_base::failbit in case of a parse error.

We believe there are a few minor problems with this blanket requirement in conjunction with the wording specific to each get() member function.

First, besides get() there are other member functions with a slightly different name (for example, get_date()). It's not completely clear that the intent of the paragraph is to include those as well, and at least one implementation has interpreted the requirement literally.

Second, the requirement to "set the argument to ios_base::failbit" suggests that the functions are not permitted to set it to any other value (such as ios_base::eofbit, or even ios_base::eofbit | ios_base::failbit).

However, 22.2.2.1.2, p5 (Stage 3 of num_get parsing) and p6 (bool parsing) specifies that the do_get functions perform err |= ios_base::eofbit, which contradicts the earlier requirement to ignore err's initial value.

22.2.6.1.2, p1 (the Effects clause of the money_get facet's do_get member functions) also specifies that err's initial value be used to compute the final value by ORing it with either ios_base::failbit or with ios_base::eofbit | ios_base::failbit.
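
For illustration only, a usage sketch of the protocol the resolution below describes: the facet ignores err's initial value and reports end-of-file and/or parse failure through it.

    #include <iterator>
    #include <locale>
    #include <sstream>

    int main()
    {
        std::istringstream in("123");
        std::istreambuf_iterator<char> it(in), end;
        std::ios_base::iostate err = std::ios_base::badbit;  // initial value is ignored
        long v = 0;
        std::use_facet<std::num_get<char> >(in.getloc()).get(it, end, in, err, v);
        // err is now ios_base::eofbit (the input was exhausted, but the parse
        // succeeded), and v == 123.
    }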

Proposed resolution:

We believe the intent is for all facet member functions that take an ios_base::iostate& argument, err, to ignore its initial value, start by setting err to ios_base::goodbit, and subsequently set it to ios_base::eofbit, ios_base::failbit, or both, in response to reaching the end of the input or encountering a parse error.

To that effect we propose to change 22.2, p2 as follows:

The put() members make no provision for error reporting. (Any failures of the OutputIterator argument must be extracted from the returned iterator.) Unless otherwise specified, the get() members that take an ios_base::iostate& argument whose value they ignore, but set to ios_base::failbit in case of a parse error., err, start by evaluating err = ios_base::goodbit, and may subsequently set err to either ios_base::eofbit, or ios_base::failbit, or ios_base::eofbit | ios_base::failbit in response to reaching the end-of-file or in case of a parse error, or both, respectively.


588. requirements on zero sized tr1::arrays and other details

Section: 23.2.1 [array] Status: New Submitter: Gennaro Prota Date: 2006-07-18

View other active issues in [array].

View all other issues in [array].

View all issues with New status.

Discussion:

The wording used for section 23.2.1 [lib.array] seems to be subtly ambiguous about zero sized arrays (N==0). Specifically:

* "An instance of array<T, N> stores N elements of type T, so that [...]"

Does this imply that a zero sized array object stores 0 elements, i.e. that it cannot store any element of type T? The next point clarifies the rationale behind this question, basically how to implement begin() and end():

* 23.2.1.5 [lib.array.zero], p2: "In the case that N == 0, begin() == end() == unique value."

What does "unique" mean in this context? Let's consider the following possible implementations, all relying on a partial specialization:

a)
    template< typename T >
    class array< T, 0 > {
    
        ....

        iterator begin()
        { return iterator( reinterpret_cast< T * >( this ) ); }
        ....

    };

This has been used in boost, probably intending that the return value had to be unique to the specific array object and that the array couldn't store any T. Note that, besides relying on a reinterpret_cast, it has (more than potential) alignment problems.

b)
    template< typename T >
    class array< T, 0 > {
    
        T t;

        iterator begin()
        { return iterator( &t ); }
        ....

    };

This provides a value which is unique to the object and to the type of the array, but requires storing a T. Also, it would allow the user to mistakenly provide an initializer list with one element.

A slight variant could be returning *the* null pointer of type T

    return static_cast<T*>(0);

In this case the value would be unique to the type array<T, 0> but not to the objects (all objects of type array<T, 0> with the same value for T would yield the same pointer value).

Furthermore this is inconsistent with what the standard requires from allocation functions (see library issue 9).

c) same as above but with t being a static data member; again, the value would be unique to the type, not to the object.

d) to avoid storing a T *directly* while disallowing the possibility of using a one-element initializer list, a non-aggregate nested class could be defined

    struct holder { holder() {} T t; } h;

and then begin be defined as

 iterator begin() { return &h.t; }

But then, it's arguable whether the array stores a T or not. Indirectly it does.

-----------------------------------------------------

Now, on different issues:

* what's the effect of calling assign(T&) on a zero-sized array? There seems to be mention only of front() and back(), in 23.2.1 [lib.array] p4 (I would also suggest moving that bullet to section 23.2.1.5 [lib.array.zero], for locality of reference)

* (minor) the opening paragraph of 23.2.1 [lib.array] is worded a bit inconsistently with that of other sequences: that's not a problem in itself, but compare it, for instance, with "A vector is a kind of sequence that supports random access iterators"; though the intent is obvious, one might argue that the wording used for arrays doesn't say what an array is, and relies on the reader to infer that it is what the <array> header defines.

* it would be desirable to have a static const data member of type std::size_t, with value N, for usage as an integral constant expression (see the sketch after this list)

* section 23.1 [lib.container.requirements] seems not to consider fixed-size containers at all, as it says: "[containers] control allocation and deallocation of these objects [the contained objects] through constructors, destructors, *insert and erase* operations"

* max_size() isn't specified: the result is obvious but, technically, it relies on table 80: "size() of the largest possible container" which, again, doesn't seem to consider fixed size containers
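
For illustration of the static data member suggested above (the names array_sketch and static_size are hypothetical, not proposed wording):

    #include <cstddef>

    template <class T, std::size_t N>
    struct array_sketch
    {
        T elems[N];
        static const std::size_t static_size = N;  // usable as a constant expression
    };

    // int buffer[array_sketch<int, 4>::static_size];  // e.g. an array bound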

Proposed resolution:


592. Incorrect treatment of rdbuf()->close() return type

Section: 27.8.1.9 [ifstream.members] Status: New Submitter: Christopher Kohlhoff Date: 2006-08-17

View all other issues in [ifstream.members].

View all issues with New status.

Discussion:

I just spotted a minor problem in 27.8.1.7 [lib.ifstream.members] para 4 and also 27.8.1.13 [lib.fstream.members] para 4. In both places it says:

void close();

Effects: Calls rdbuf()->close() and, if that function returns false, ...

However, basic_filebuf::close() (27.8.1.2) returns a pointer to the filebuf on success, null on failure, so I think it is meant to say "if that function returns a null pointer". Oddly, it is correct for basic_ofstream.

Proposed resolution:

Change 27.8.1.7 [lib.ifstream.members], p5:

Effects: Calls rdbuf()->close() and, if that function fails (returns false a null pointer), calls setstate(failbit) (which may throw ios_base::failure (27.4.4.3)).

Change 27.8.1.13 [lib.fstream.members], p5:

Effects: Calls rdbuf()->close() and, if that function fails (returns false a null pointer), calls setstate(failbit) (which may throw ios_base::failure (27.4.4.3)).


595. TR1/C++0x: fabs(complex<T>) redundant / wrongly specified

Section: 26.3 [complex.numbers] Status: New Submitter: Stefan Große Pawig Date: 2006-09-24

View other active issues in [complex.numbers].

View all other issues in [complex.numbers].

View all issues with New status.

Discussion:

TR1 introduced, in the C compatibility chapter, the function fabs(complex<T>):

----- SNIP -----
8.1.1 Synopsis                                [tr.c99.cmplx.syn]

  namespace std {
  namespace tr1 {
[...]
  template<class T> complex<T> fabs(const complex<T>& x);
  } // namespace tr1
  } // namespace std

[...]

8.1.8 Function fabs                          [tr.c99.cmplx.fabs]

1 Effects: Behaves the same as C99 function cabs, defined in
  subclause 7.3.8.1.
----- SNIP -----

The current C++0X draft document (n2009.pdf) adopted this definition in chapter 26.3.1 (under the comment // 26.3.7 values) and 26.3.7/7.

But in C99 (ISO/IEC 9899:1999 as well as the 9899:TC2 draft document n1124), the referenced subclause reads

----- SNIP -----
7.3.8.1 The cabs functions

  Synopsis

1 #include <complex.h>
  double cabs(double complex z);
  float cabsf(float complex z);
  long double cabsl(long double complex z);

  Description

2 The cabs functions compute the complex absolute value (also called
  norm, modulus, or magnitude) of z.

  Returns

3 The cabs functions return the complex absolute value.
----- SNIP -----

Note that the return type of the cabs*() functions is not a complex type. Thus, they are equivalent to the already well established template<class T> T abs(const complex<T>& x); (26.2.7/2 in ISO/IEC 14882:1998, 26.3.7/2 in the current draft document n2009.pdf).

So either the return value of fabs() is specified wrongly, or fabs() does not behave the same as C99's cabs*().

Proposed resolution:

This depends on the intention behind the introduction of fabs().

If the intention was to provide a /complex/ valued function that calculates the magnitude of its argument, this should be explicitly specified. In TR1, the categorization under "C compatibility" is definitely wrong, since C99 does not provide such a complex valued function.

Also, it remains questionable if such a complex valued function is really needed, since complex<T> supports construction and assignment from real valued arguments. There is no difference in observable behaviour between

  complex<double> x, y;
  y = fabs(x);
  complex<double> z(fabs(x));

and

  complex<double> x, y;
  y = abs(x);
  complex<double> z(abs(x));

If on the other hand the intention was to provide the intended functionality of C99, fabs() should be either declared deprecated or (for C++0X) removed from the standard, since the functionality is already provided by the corresponding overloads of abs().


596. 27.8.1.3 Table 112 omits "a+" and "a+b" modes

Section: 27.8.1.4 [filebuf.members] Status: New Submitter: Thomas Plum Date: 2006-09-26

View other active issues in [filebuf.members].

View all other issues in [filebuf.members].

View all issues with New status.

Discussion:

In testing 27.8.1.3, Table 112 (in the latest N2009 draft), we invoke

   ostr.open("somename", ios_base::out | ios_base::in | ios_base::app)

and we expect the open to fail, because out|in|app is not listed in Table 92, and just before the table we see very specific words:

If mode is not some combination of flags shown in the table then the open fails.

But the corresponding table in the C standard, 7.19.5.3, provides two modes "a+" and "a+b", to which the C++ modes out|in|app and out|in|app|binary would presumably apply.

We would like to argue that the intent of Table 112 was to match the semantics of 7.19.5.3 and that the omission of "a+" and "a+b" was unintentional. (Otherwise there would be valid and useful behaviors available in C file I/O which are unavailable using C++, for no valid functional reason.)

We further request that the missing modes be explicitly restored to the WP, for inclusion in C++0x.

[ Martin adds: ]

...besides "a+" and "a+b" the C++ table is also missing a row for a lone app bit which in at least two current implementation as well as in Classic Iostreams corresponds to the C stdio "a" mode and has been traditionally documented as implying ios::out. Which means the table should also have a row for in|app meaning the same thing as "a+" already proposed in the issue.

Proposed resolution:


597. Decimal: The notion of 'promotion' cannot be emulated by user-defined types.

Section: TRDecimal 3.2 [trdec.types.types] Status: Open Submitter: Daveed Vandevoorde Date: 2006-04-05

View other active issues in [trdec.types.types].

View all other issues in [trdec.types.types].

View all issues with Open status.

Discussion:

In a private email, Daveed writes:

I am not familiar with the C TR, but my guess is that the class type approach still won't match a built-in type approach because the notion of "promotion" cannot be emulated by user-defined types.

Here is an example:


		 struct S {
		   S(_Decimal32 const&);  // Converting constructor
		 };
		 void f(S);

		 void f(_Decimal64);

		 void g(_Decimal32 d) {
		   f(d);
		 }

If _Decimal32 is a built-in type, the call f(d) will likely resolve to f(_Decimal64) because that requires only a promotion, whereas f(S) requires a user-defined conversion.

If _Decimal32 is a class type, I think the call f(d) will be ambiguous because both the conversion to _Decimal64 and the conversion to S will be user-defined conversions with neither better than the other.

Robert comments:

In general, a library of arithmetic types cannot exactly emulate the behavior of the intrinsic numeric types. There are several ways to tell whether an implementation of the decimal types uses compiler intrinsics or a library. For example:

                 _Decimal32 d1;
                 d1.operator+=(5);  // If d1 is a builtin type, this won't compile.

In preparing the decimal TR, we have three options:

  1. require that the decimal types be class types
  2. require that the decimal types be builtin types, like float and double
  3. specify a library of class types, but allow enough implementor latitude that a conforming implementation could instead provide builtin types

We decided as a group to pursue option #3, but that approach implies that implementations may not agree on the semantics of certain use cases (first example, above), or on whether certain other cases are well-formed (second example). Another potentially important problem is that, under the present definition of POD, the decimal classes are not POD types, but builtins will be.

Note that neither example above implies any problems with respect to C-to-C++ compatibility, since neither example can be expressed in C.

Proposed resolution:


606. Decimal: allow narrowing conversions

Section: TRDecimal 3.2 [trdec.types.types] Status: Open Submitter: Martin Sebor Date: 2006-06-15

View other active issues in [trdec.types.types].

View all other issues in [trdec.types.types].

View all issues with Open status.

Discussion:

In c++std-lib-17205, Martin writes:

...was it a deliberate design choice to make narrowing assignments ill-formed while permitting narrowing compound assignments? For instance:

      decimal32 d32;
      decimal64 d64;

      d32 = d64;    // error
      d32 += d64;   // okay

In c++std-lib-17229, Robert responds:

It is a vestige of an old idea that I forgot to remove from the paper. Narrowing assignments should be permitted. The bug is that the converting constructors that cause narrowing should not be explicit. Thanks for pointing this out.

Proposed resolution:

1. In "3.2.2 Class decimal32" synopsis, remove the explicit specifier from the narrowing conversions:

                // 3.2.2.2 conversion from floating-point type:
                explicit decimal32(decimal64 d64);
                explicit decimal32(decimal128 d128);

2. Do the same thing in "3.2.2.2. Conversion from floating-point type."

3. In "3.2.3 Class decimal64" synopsis, remove the explicit specifier from the narrowing conversion:

                // 3.2.3.2 conversion from floating-point type:
                explicit decimal64(decimal128 d128);

4. Do the same thing in "3.2.3.2. Conversion from floating-point type."

[ Redmond: We prefer explicit conversions for narrowing and implicit for widening. ]


607. Concern about short seed vectors

Section: 26.4.7.1 [rand.util.seedseq], TR1 5.1 [tr.rand] Status: New Submitter: Charles Karney Date: 2006-10-26

View other active issues in [rand.util.seedseq].

View all other issues in [rand.util.seedseq].

View all issues with New status.

Discussion:

Short seed vectors of 32-bit quantities all result in different states. However this is not true of seed vectors of 16-bit (or smaller) quantities. For example these two seeds

unsigned short seed[] = {1, 2, 3};
unsigned short seed[] = {1, 2, 3, 0};

both pack to

unsigned seed[] = {0x20001, 0x3};

yielding the same state.
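
For illustration only, a sketch of the packing described above (pack16 is a hypothetical helper, not the wording of [rand.util.seedseq]):

    #include <cstddef>
    #include <vector>

    // Pack 16-bit seed values, low half first, into 32-bit words.
    std::vector<unsigned long> pack16(const std::vector<unsigned short>& v)
    {
        std::vector<unsigned long> out((v.size() + 1) / 2, 0);
        for (std::size_t i = 0; i < v.size(); ++i)
            out[i / 2] |= static_cast<unsigned long>(v[i]) << (16 * (i % 2));
        return out;
    }

    // pack16 of {1, 2, 3}    yields {0x20001, 0x3}
    // pack16 of {1, 2, 3, 0} yields {0x20001, 0x3} as well -- the collision at issue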

Proposed resolution:

In 26.4.7.1[rand.util.seedseq]/8a, replace

Set begin[0] to 5489 + s N.

where N is the bit length of the sequence used to construct the seed_seq in 26.4.7.1/6 [rand.util.seedseq]. (This quantity is called n in 26.4.7.1/6 [rand.util.seedseq], but n has a different meaning in 26.4.7.1/8 [rand.util.seedseq]. We have 32*(s-1) < N <= 32*s.) Now

unsigned short seed[] = {1, 2, 3, 0};
unsigned seed[] = {0x20001, 0x3};

are equivalent (N = 64), but

unsigned short seed[] = {1, 2, 3};

gives a distinct state (N = 48).


608. Unclear seed_seq construction details

Section: 26.4.7.1 [rand.util.seedseq], TR1 5.1 [tr.rand] Status: New Submitter: Charles Karney Date: 2006-10-26

View other active issues in [rand.util.seedseq].

View all other issues in [rand.util.seedseq].

View all issues with New status.

Discussion:

In 26.4.7.1 [rand.util.seedseq] /6, the order of packing the inputs into b and the treatment of signed quantities is unclear. Better to spell it out.

Proposed resolution:

b = sum( unsigned(begin[i]) * 2^(w*i), 0 <= i < end-begin )

where w is the bit width of the InputIterator's value type.


612. numeric_limits::is_modulo insufficently defined

Section: 18.2.1.2 [numeric.limits.members] Status: Open Submitter: Chris Jefferson Date: 2006-11-10

View all other issues in [numeric.limits.members].

View all issues with Open status.

Discussion:

18.2.1.2 [numeric.limits.members], p55 states that "A type is modulo if it is possible to add two positive numbers together and have a result that wraps around to a third number that is less". This seems insufficient for the following reasons:

  1. Doesn't define what the value received is.
  2. Doesn't state the result is repeatable
  3. Doesn't require that doing addition, subtraction and other operations on all values is defined behaviour.

[ Batavia: Related to N2144. Pete: is there an ISO definition of modulo? Underflow on signed behavior is undefined. ]

Proposed resolution:

Suggest that 18.2.1.2 [numeric.limits.members], paragraph 57 be amended to:

A type is modulo if, it is possible to add two positive numbers and have a result that wraps around to a third number that is less. given any operation involving +,- or * on values of that type whose value would fall outside the range [min(), max()], then the value returned differs from the true value by an integer multiple of (max() - min() + 1).
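
For illustration, what the proposed definition guarantees for a type with is_modulo equal to true, using unsigned int (whose wrap-around behavior is already well defined):

    #include <limits>

    unsigned int umax = std::numeric_limits<unsigned int>::max();
    unsigned int r = umax + 2u;
    // The true value is max() + 2; the stored value is 1. The difference,
    // max() + 1, is exactly one multiple of (max() - min() + 1).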


614. std::string allocator requirements still inconsistent

Section: 21.3 [basic.string] Status: Open Submitter: Bo Persson Date: 2006-12-05

View all other issues in [basic.string].

View all issues with Open status.

Discussion:

This is based on N2134, where 21.3.1/2 states: "... The Allocator object used shall be a copy of the Allocator object passed to the basic_string object's constructor or, if the constructor does not take an Allocator argument, a copy of a default-constructed Allocator object."

Section 21.3.2/1 lists two constructors:

basic_string(const basic_string<charT,traits,Allocator>& str );

basic_string(const basic_string<charT,traits,Allocator>& str ,
             size_type pos , size_type n = npos,
             const Allocator& a = Allocator());

and then says "In the first form, the Allocator value used is copied from str.get_allocator().", which isn't an option according to 21.3.1.

[ Batavia: We need blanket statement to the effect of: ]

  1. If an allocator is passed in, use it, or,
  2. If a string is passed in, use its allocator.

[ Review constructors and functions that return a string; make sure we follow these rules (substr, operator+, etc.). Howard to supply wording. ]

[ Bo adds: The new container constructor which takes only a size_type is not consistent with 23.1 [container.requirements], p9 which says in part:

All other constructors for these container types take an Allocator& argument (20.1.2), an allocator whose value type is the same as the container's value type. A copy of this argument is used for any memory allocation performed, by these constructors and by all member functions, during the lifetime of each container object.
]

Proposed resolution:


617. std::array is a sequence that doesn't satisfy the sequence requirements?

Section: 23.2.1 [array] Status: New Submitter: Bo Persson Date: 2006-12-30

View other active issues in [array].

View all other issues in [array].

View all issues with New status.

Discussion:

The <array> header is given under 23.2 [sequences]. 23.2.1 [array]/paragraph 3 says:

"Unless otherwise specified, all array operations are as described in 23.1 [container.requirements]".

However, array isn't mentioned at all in section 23.1 [container.requirements]. In particular, Table 82 "Sequence requirements" lists several operations (insert, erase, clear) that std::array does not have in 23.2.1 [array].

Also, Table 83 "Optional sequence operations" lists several operations that std::array does have, but array isn't mentioned.
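
For illustration only, a sketch of the mismatch (the commented-out calls are Table 82 operations that array does not provide):

    #include <array>

    std::array<int, 3> a = { { 1, 2, 3 } };

    // a.insert(a.begin(), 0);  // no such member, although Table 82 lists insert
    // a.erase(a.begin());      // likewise no erase or clear
    // a[0], a.front(), a.back() are provided (Table 83 optional operations)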

Proposed resolution:


618. valarray::cshift() effects on empty array

Section: 26.5.2.7 [valarray.members] Status: New Submitter: Gabriel Dos Reis Date: 2007-01-10

View all issues with New status.

Discussion:

I would respectfully request an issue be opened with the intention to clarify the wording for size() == 0 for cshift.

Proposed resolution:

Change 26.5.2.7 [valarray.members], paragraph 7:

This function returns an object of class valarray<T>, of length size()., each of whose elements I is (*this)[(I + n ) % size()]. Thus, if element zero is taken as the leftmost element, a positive value of n shifts the elements circularly left n places. When size() is positive, each element at index I of the returned valarray is equivalent to (*this)[(I + n) % size()]. Therefore cshift() returns an n-fold left cyclic rotation of the elements of *this.
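
For illustration only (not proposed wording), the intended semantics for a non-empty array, together with the empty case whose wording is at issue:

    #include <valarray>

    int init[] = { 0, 1, 2, 3, 4 };
    std::valarray<int> v(init, 5);
    std::valarray<int> w = v.cshift(2);  // w is {2, 3, 4, 0, 1}: a 2-fold left rotation

    std::valarray<int> e;                // size() == 0
    // e.cshift(2);                      // the case whose wording the issue asks to clarify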


620. valid uses of empty valarrays

Section: 26.5.2.1 [valarray.cons] Status: New Submitter: Martin Sebor Date: 2007-01-20

View other active issues in [valarray.cons].

View all other issues in [valarray.cons].

View all issues with New status.

Discussion:

The Effects clause for the default valarray ctor suggests that it is possible to increase the size of an empty valarray object by calling other non-const member functions of the class besides resize(). However, such an interpretation would be contradicted by the requirement on the copy assignment operator (and apparently also that on the computed assignments) that the assigned arrays be the same size. See the reflector discussion starting with c++std-lib-17871.

In addition, Footnote 280 uses some questionable normative language.

Proposed resolution:

Reword the Effects clause and Footnote 280 as follows (26.5.2.1 [valarray.cons]):

valarray();

Effects: Constructs an object of class valarray<T>,279) which has zero length until it is passed into a library function as a modifiable lvalue or through a non-constant this pointer.280)

Postcondition: size() == 0.

Footnote 280: This default constructor is essential, since arrays of valarray are likely to prove useful. There shall also be a way to change the size of an array after initialization; this is supplied by the semantics may be useful. The length of an empty array can be increased after initialization by means of the resize() member function.


621. non-const copy assignment operators of helper arrays

Section: 26.5 [numarray] Status: New Submitter: Martin Sebor Date: 2007-01-20

View all other issues in [numarray].

View all issues with New status.

Discussion:

The computed and "fill" assignment operators of valarray helper array class templates (slice_array, gslice_array, mask_array, and indirect_array) are const member functions of each class template (the latter by the resolution of 123 since they have reference semantics and thus do not affect the state of the object on which they are called. However, the copy assignment operators of these class templates, which also have reference semantics, are non-const. The absence of constness opens the door to speculation about whether they really are intended to have reference semantics (existing implementations vary widely).

Proposed resolution:

Declare the copy assignment operators of all four helper array class templates const.

Specifically, make the following edits:

Change the signature in 26.5.5 [template.slice.array] and 26.5.5.2 [slice.arr.assign] as follows:


slice_array& operator= (const slice_array&) const;

        

Change the signature in 26.5.7 [template.gslice.array] and 26.5.7.2 [gslice.array.assign] as follows:


gslice_array& operator= (const gslice_array&) const;

        

Change the signature in 26.5.8 [template.mask.array] and 26.5.8.2 [mask.array.assign] as follows:


mask_array& operator= (const mask_array&) const;

        

Change the signature in 26.5.9 [template.indirect.array] and 26.5.9.2 [indirect.array.assign] as follows:


indirect_array& operator= (const indirect_array&) const;

        

622. behavior of filebuf dtor and close on error

Section: 27.8.1.17 [fstream.members] Status: New Submitter: Martin Sebor Date: 2007-01-20

View all issues with New status.

Discussion:

basic_filebuf dtor is specified to have the following straightforward effects:

Effects: Destroys an object of class basic_filebuf. Calls close().

close() does a lot of potentially complicated processing, including calling overflow() to write out the termination sequence (to bring the output sequence to its initial shift state). Since any of the functions called during the processing can throw an exception, what should the effects of an exception be on the dtor? Should the dtor catch and swallow it or should it propagate it to the caller? The text doesn't seem to provide any guidance in this regard other than the general restriction on throwing (but not propagating) exceptions from destructors of library classes in 17.4.4.8 [res.on.exception.handling].

Further, the last thing close() is specified to do is call fclose() to close the FILE pointer. The last sentence of the Effects clause reads:

... If any of the calls to overflow or std::fclose fails then close fails.

This suggests that close() might be required to call fclose() if and only if none of the calls to overflow() fails, and avoid closing the FILE otherwise. This way, if overflow() failed to flush out the data, the caller would have the opportunity to try to flush it again (perhaps after trying to deal with whatever problem may have caused the failure), rather than losing it outright.

On the other hand, the function's Postcondition specifies that is_open() == false, which suggests that it should call fclose() unconditionally. However, since Postcondition clauses are specified for many functions in the standard, including constructors where they obviously cannot apply after an exception, it's not clear whether this Postcondition clause is intended to apply even after an exception.

It might be worth noting that the traditional behavior (Classic Iostreams fstream::close() and C fclose()) is to close the FILE unconditionally, regardless of errors.

[ See 397 and 418 for related issues. ]

Proposed resolution:

After discussing this on the reflector (see the thread starting with c++std-lib-17650) we propose that close() be clarified to match the traditional behavior, that is to close the FILE unconditionally, even after errors or exceptions. In addition, we propose the dtor description be amended so as to explicitly require it to catch and swallow any exceptions thrown by close().

Specifically, we propose to make the following edits in 27.8.1.4 [filebuf.members]:


basic_filebuf<charT,traits>* close();

            

Effects: If is_open() == false, returns a null pointer. If a put area exists, calls overflow(traits::eof()) to flush characters. If the last virtual member function called on *this (between underflow, overflow, seekoff, and seekpos) was overflow then calls a_codecvt.unshift (possibly several times) to determine a termination sequence, inserts those characters and calls overflow(traits::eof()) again. Finally, regardless of whether any of the preceding calls fails or throws an exception, the function it closes the file ("as if" by calling std::fclose(file)).334) If any of the calls made by the function to overflow or, including std::fclose, fails then close fails by returning a null pointer. If one of these calls throws an exception, the exception is caught and rethrown after closing the file.

And to make the following edits in 27.8.1.2 [filebuf.cons].


virtual ~basic_filebuf();

            

Effects: Destroys an object of class basic_filebuf<charT,traits>. Calls close(). If an exception occurs during the destruction of the object, including the call to close(), the exception is caught but not rethrown (see 17.4.4.8 [res.on.exception.handling]).
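
For illustration only, a minimal sketch of the proposed behavior using a hypothetical filebuf-like class (filebuf_sketch), not the library's own source:

    #include <cstdio>

    struct filebuf_sketch
    {
        std::FILE* file;

        filebuf_sketch() : file(0) {}

        void close()
        {
            // ... flush the put area and write the unshift sequence (omitted);
            // either step may fail or throw ...
            if (file) {
                std::fclose(file);  // the file is closed regardless of earlier failures
                file = 0;
            }
        }

        ~filebuf_sketch()
        {
            try { close(); }
            catch (...) { /* caught but not rethrown (17.4.4.8) */ }
        }
    };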


623. pubimbue forbidden to call imbue

Section: 27.1.1 [iostream.limits.imbue] Status: New Submitter: Martin Sebor Date: 2007-01-20

View all issues with New status.

Discussion:

27.1.1 [iostream.limits.imbue] specifies that "no function described in clause 27 except for ios_base::imbue causes any instance of basic_ios::imbue or basic_streambuf::imbue to be called."

That contradicts the Effects clause for basic_streambuf::pubimbue() which requires the function to do just that: call basic_streambuf::imbue().

Proposed resolution:

To fix this, rephrase the sentence above to allow pubimbue to do what it was designed to do. Specifically, change 27.1.1 [iostream.limits.imbue], p1 to read:

No function described in clause 27 except for ios_base::imbue and basic_filebuf::pubimbue causes any instance of basic_ios::imbue or basic_streambuf::imbue to be called. ...


624. valarray assignment and arrays of unequal length

Section: 26.5.2.2 [valarray.assign] Status: New Submitter: Martin Sebor Date: 2007-01-20

View all issues with New status.

Discussion:

The behavior of the valarray copy assignment operator is defined only when both sides have the same number of elements and the spec is explicit about assignments of arrays of unequal lengths having undefined behavior.

However, the generalized subscripting assignment operators overloaded on slice_array et al (26.5.2.2 [valarray.assign]) don't have any such restriction, leading the reader to believe that the behavior of these overloads is well defined regardless of the lengths of the arguments.

For example, based on the reading of the spec the behavior of the snippet below can be expected to be well-defined:

    const std::slice from_0_to_3 (0, 3, 1);   // refers to elements 0, 1, 2
    const std::valarray<int> a (1, 3);        // a = { 1, 1, 1 }
    std::valarray<int>       b (2, 4);        // b = { 2, 2, 2, 2 }

    b = a [from_0_to_3];
        

In practice, b may end up being { 1, 1, 1 }, { 1, 1, 1, 2 }, or anything else, indicating that existing implementations vary.

Quoting from Section 3.4, Assignment operators, of Al Vermeulen's Proposal for Standard C++ Array Classes (see c++std-lib-704; N0308):

...if the size of the array on the right hand side of the equal sign differs from the size of the array on the left, a run time error occurs. How this error is handled is implementation dependent; for compilers which support it, throwing an exception would be reasonable.

And see more history in N0280.

Proposed resolution:

It has been argued in discussions on the committee's reflector that the semantics of all valarray assignment operators should be permitted to be undefined unless the length of the arrays being assigned is the same as the length of the one being assigned from. See the thread starting at c++std-lib-17786.

In order to reflect such views, the standard must specify that the size of the array referred to by the argument of the assignment must match the size of the array under assignment, for example by adding a Requires clause to 26.5.2.2 [valarray.assign] as follows:

Requires: The length of the array to which the argument refers equals size().

Note that it's far from clear that such leeway is necessary in order to implement valarray efficiently.


625. mixed up Effects and Returns clauses

Section: 17 [library] Status: Open Submitter: Martin Sebor Date: 2007-01-20

View all other issues in [library].

View all issues with Open status.

Discussion:

Many member functions of basic_string are overloaded, with some of the overloads taking a string argument, others value_type*, others size_type, and others still iterators. Often, the requirements on one of the overloads are expressed in the form of Effects, Throws, and in the Working Paper (N2134) also Remark clauses, while those on the rest of the overloads via a reference to this overload and using a Returns clause.

The difference between the two forms of specification is that per 17.3.1.3 [structure.specifications], p3, an Effects clause specifies "actions performed by the functions," i.e., its observable effects, while a Returns clause is "a description of the return value(s) of a function" that does not impose any requirements on the function's observable effects.

Since only Notes are explicitly defined to be informative and all other paragraphs are explicitly defined to be normative, like Effects and Returns, the new Remark clauses also impose normative requirements.

So by this strict reading of the standard there are some member functions of basic_string that are required to throw an exception under some conditions or use specific traits members while many other otherwise equivalent overloads, while obliged to return the same values, aren't required to follow the exact same requirements with regards to the observable effects.

Here's an example of this problem that was precipitated by the change from informative Notes to normative Remarks (presumably made to address 424):

In the Working Paper, find(string, size_type) contains a Remark clause (which is just a Note in the current standard) requiring it to use traits::eq().

find(const charT *s, size_type pos) is specified to return find(string(s), pos) by a Returns clause and so it is not required to use traits::eq(). However, the Working Paper has replaced the original informative Note about the function using traits::length() with a normative requirement in the form of a Remark. Calling traits::length() may be suboptimal, for example when the argument is a very long array whose initial substring doesn't appear anywhere in *this.

Here's another similar example, one that existed even prior to the introduction of Remarks:

insert(size_type pos, string, size_type, size_type) is required to throw out_of_range if pos > size().

insert(size_type pos, string str) is specified to return insert(pos, str, 0, npos) by a Returns clause and so its effects when pos > size() are strictly speaking unspecified.

I believe a careful review of the current Effects and Returns clauses is needed in order to identify all such problematic cases. In addition, a review of the Working Paper should be done to make sure that the newly introduced normative Remark clauses do not impose any undesirable normative requirements in place of the original informative Notes.

[ Batavia: Alan and Pete to work. ]

Proposed resolution:


626. new Remark clauses not documented

Section: 17.3.1.3 [structure.specifications] Status: Open Submitter: Martin Sebor Date: 2007-01-20

View other active issues in [structure.specifications].

View all other issues in [structure.specifications].

View all issues with Open status.

Discussion:

The Remark clauses newly introduced into the Working Paper (N2134) are not mentioned in 17.3.1.3 [structure.specifications] where we list the meaning of Effects, Requires, and other clauses (with the exception of Notes which are documented as informative in 17.3.1.1 [structure.summary], p2, and which they replace in many cases).

We propose adding a bullet for Remarks along with a brief description.

[ Batavia: Alan and Pete to work. ]

Proposed resolution:


627. Low memory and exceptions

Section: 18.5.1.1 [new.delete.single] Status: New Submitter: P.J. Plauger Date: 2007-01-23

View all other issues in [new.delete.single].

View all issues with New status.

Discussion:

I recognize the need for nothrow guarantees in the exception reporting mechanism, but I strongly believe that implementors also need an escape hatch when memory gets really low. (Like, there's not enough heap to construct and copy exception objects, or not enough stack to process the throw.) I'd like to think we can put this escape hatch in 18.5.1.1 [new.delete.single], operator new, but I'm not sure how to do it. We need more than a footnote, but the wording has to be a bit vague. The idea is that if new can't allocate something sufficiently small, it has the right to abort/call terminate/call unexpected.

Proposed resolution:


629. complex insertion and locale dependence

Section: 26.3.6 [complex.ops] Status: New Submitter: Gabriel Dos Reis Date: 2007-01-28

View all other issues in [complex.ops].

View all issues with New status.

Discussion:

Is there an issue opened for (0,3) as a complex number with the French locale? With the English locale, the above parses as an imaginary complex number. With the French locale it parses as a real complex number.
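
For illustration (the locale name "fr_FR" is an assumption and platform dependent):

    #include <complex>
    #include <locale>
    #include <sstream>

    int main()
    {
        std::complex<double> c;
        std::istringstream in("(0,3)");
        // in.imbue(std::locale("fr_FR"));  // French: ',' is the decimal point
        in >> c;
        // With an English-style locale, "(0,3)" yields real 0, imaginary 3.
        // With a French locale, "0,3" parses as the single real value 0.3,
        // so the result is (0.3, 0).
    }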

Further notes/ideas from the lib-reflector, messages 17982-17984:

Add additional entries in numpunct to cover the complex separator (French would be ';').

Insert a space before the comma, which should eliminate the ambiguity.

Solve the problem for ordered sequences in general, perhaps with a dedicated facet. Then complex should use that solution.

Proposed resolution:


630. arrays of valarray

Section: 26.5.2.1 [valarray.cons] Status: New Submitter: Martin Sebor Date: 2007-01-28

View other active issues in [valarray.cons].

View all other issues in [valarray.cons].

View all issues with New status.

Discussion:

Section 26.1 [numeric.requirements], p1 suggests that a valarray specialization on a type T that satisfies the requirements enumerated in the paragraph is itself a valid type on which valarray may be instantiated (Footnote 269 makes this clear). I.e., valarray<valarray<T> > is valid as long as T is valid. However, since implementations of valarray are permitted to initialize storage allocated by the class by invoking the default ctor of T followed by the copy assignment operator, such implementations of valarray wouldn't work with (perhaps user-defined) specializations of valarray whose assignment operator had undefined behavior when the size of its argument didn't match the size of *this. By "wouldn't work" I mean that it would be impossible to resize such an array of arrays by calling the resize() member function on it if the function used the copy assignment operator after constructing all elements using the default ctor (e.g., by invoking new value_type[N] to obtain default-initialized storage) as it's permitted to do.

Stated more generally, the problem is that valarray<valarray<T> >::resize(size_t) isn't required or guaranteed to have well-defined semantics for every type T that satisfies all requirements in 26.1 [numeric.requirements].
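
For illustration only, the kind of code whose formal status is in question (most implementations handle it as intended in practice):

    #include <valarray>

    int main()
    {
        std::valarray<int> proto(7, 5);               // five elements with value 7
        std::valarray<std::valarray<int> > vv;
        vv.resize(3, proto);                          // a permitted implementation
        // default-constructs the three new elements (each of length 0) and then
        // copy-assigns proto to them -- an assignment of arrays of unequal length,
        // which currently has undefined behavior
    }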

I believe this problem was introduced by the adoption of the resolution outlined in N0857, Assignment of valarrays, from 1996. The copy assignment operator of the original numerical array classes proposed in N0280, as well as the one proposed in N0308 (both from 1993), had well-defined semantics for arrays of unequal size (the latter explicitly only when *this was empty; assignment of non empty arrays of unequal size was a runtime error).

The justification for the change given in N0857 was the "loss of performance [deemed] only significant for very simple operations on small arrays or for architectures with very few registers."

Since tiny arrays on a limited subset of hardware architectures are likely to be an exceedingly rare case (despite the continued popularity of x86) I propose to revert the resolution and make the behavior of all valarray assignment operators well-defined even for non-conformal arrays (i.e., arrays of unequal size). I have implemented this change and measured no significant degradation in performance in the common case (non-empty arrays of equal size). I have measured a 50% (and in some cases even greater) speedup in the case of assignments to empty arrays versus calling resize() first followed by an invocation of the copy assignment operator.

Proposed resolution:

Change 26.5.2.2 [valarray.assign], p1 as follows:

valarray<T>& operator=(const valarray<T>& x);

-1- Each element of the *this array is assigned the value of the corresponding element of the argument array. The resulting behavior is undefined if When the length of the argument array is not equal to the length of the *this array. resizes *this to make the two arrays the same length, as if by calling resize(x.size()), before performing the assignment.

And add a new paragraph just below paragraph 1 with the following text:

-2- Postcondition: size() == x.size().

Also add the following paragraph to 26.5.2.2 [valarray.assign], immediately after p4:

-?- When the length, N of the array referred to by the argument is not equal to the length of *this, the operator resizes *this to make the two arrays the same length, as if by calling resize(N), before performing the assignment.


631. conflicting requirements for BinaryPredicate

Section: 25 [algorithms] Status: Open Submitter: James Kanze Date: 2007-01-31

View all other issues in [algorithms].

View all issues with Open status.

Discussion:

The general requirements for BinaryPredicate (in 25 [algorithms]/8) contradict the implied specific requirements for some functions. In particular, it says that:

[...] if an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should work correctly in the construct if (binary_pred (*first1 , *first2 )){...}. BinaryPredicate always takes the first iterator type as its first argument, that is, in those cases when T value is part of the signature, it should work correctly in the context of if (binary_pred (*first1 , value)){...}.

In the description of upper_bound (25.3.3.2 [upper.bound]/2), however, the use is described as "!comp(value, e)", where e is an element of the sequence (a result of dereferencing *first).

In the description of lexicographical_compare, we have both "*first1 < *first2" and "*first2 < *first1" (which presumably implies "comp( *first1, *first2 )" and "comp( *first2, *first1 )").

[ Toronto: Moved to Open. ConceptGCC seems to get lower_bound and upper_bound to work without these changes. ]

Proposed resolution:

Logically, the BinaryPredicate is used as an ordering relationship, with the semantics of "less than". Depending on the function, it may be used to determine equality, or any of the inequality relationships; doing this requires being able to use it with either parameter first. I would thus suggest that the requirement be:

[...] BinaryPredicate always takes the first iterator value_type as one of its arguments, it is unspecified which. If an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should work correctly both in the construct if (binary_pred (*first1 , *first2 )){...} and if (binary_pred (*first2, *first1)){...}. In those cases when T value is part of the signature, it should work correctly in the context of if (binary_pred (*first1 , value)){...} and of if (binary_pred (value, *first1)){...}. [Note: if the two types are not identical, and neither is convertible to the other, this may require that the BinaryPredicate be a function object with two overloaded operator()() functions. --end note]

Alternatively, one could specify an order for each function. IMHO, this would be more work for the committee, more work for the implementors, and of no real advantage for the user: some functions, such as lexicographical_compare or equal_range, will still require both functions, and it seems like a much easier rule to teach that both functions are always required, rather than to have a complicated list of when you only need one, and which one.
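
For illustration only (Employee and less_by_id are hypothetical), a comparator written in the style the note above suggests:

    #include <algorithm>

    struct Employee { int id; };

    // Both argument orders are provided; equal_range, for example, may apply the
    // predicate as comp(*it, value) and as comp(value, *it).
    struct less_by_id
    {
        bool operator()(const Employee& e, int id) const { return e.id < id; }
        bool operator()(int id, const Employee& e) const { return id < e.id; }
    };

    // Given a range of Employee sorted by id:
    // std::equal_range(first, last, 42, less_by_id());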


632. Time complexity of size() for std::set

Section: 23.1 [container.requirements] Status: New Submitter: Lionel B Date: 2007-02-01

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with New status.

Discussion:

A recent news group discussion:

Anyone know if the Standard has anything to say about the time complexity of size() for std::set? I need to access a set's size (/not/ to know if it is empty!) heavily during an algorithm and was thus wondering whether I'd be better off tracking the size "manually" or whether that'd be pointless.

That would be pointless. size() is O(1).

Nit: the standard says "should" have constant time. Implementations may take license to do worse. I know that some do this for std::list<> as a part of some trade-off with other operations.

I was aware of that, hence my reluctance to use size() for std::set.

However, this reason would not apply to std::set<> as far as I can see.

Ok, I guess the only option is to try it and see...

If I have any recommendation to the C++ Standards Committee it is that implementations must (not "should"!) document clearly[1], where known, the time complexity of *all* container access operations.

[1] In my case (gcc 4.1.1) I can't swear that the time complexity of size() for std::set is not documented... but if it is it's certainly well hidden away.

Proposed resolution:


634. allocator.address() doesn't work for types overloading operator&

Section: 20.6.1.1 [allocator.members] Status: New Submitter: Howard Hinnant Date: 2007-02-07

View all other issues in [allocator.members].

View all issues with New status.

Duplicate of: 350

Discussion:

20.6.1.1 [allocator.members] says:

pointer address(reference x) const;

-1- Returns: &x.

20.1.1 [utility.arg.requirements] defines CopyConstructible, which currently not only defines the semantics of copy construction, but also restricts what an overloaded operator& may do. I believe proposals are in the works (such as concepts and rvalue reference) to decouple these two requirements. Indeed it is not evident that we should disallow overloading operator& to return something other than the address of *this.

An example of when you want to overload operator& to return something other than the object's address is proxy references such as vector<bool> (or its replacement, currently code-named bit_vector). Taking the address of such a proxy reference should logically yield a proxy pointer, which when dereferenced, yields a copy of the original proxy reference again.

On the other hand, some code truly needs the address of an object, and not a proxy (typically for determining the identity of an object compared to a reference object). boost has long recognized this dilemma and solved it with boost::addressof. It appears to me that this would be useful functionality for the default allocator. Adopting this definition for allocator::address would free the standard of requiring anything special from types which overload operator&. Issue 580 is expected to make use of allocator::address mandatory for containers.
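
For illustration, a sketch of the technique boost::addressof uses to obtain the true address even when operator& is overloaded (the name true_address is hypothetical):

    template <class T>
    T* true_address(T& x)
    {
        return reinterpret_cast<T*>(
            &const_cast<char&>(reinterpret_cast<const volatile char&>(x)));
    }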

Proposed resolution:

Change 20.6.1.1 [allocator.members]:

pointer address(reference x) const;

-1- Returns: &x. The actual address of x, even in the presence of an overloaded operator&.

const_pointer address(const_reference x) const;

-2- Returns: &x. The actual address of x, even in the presence of an overloaded operator&.

[ post Oxford: This would be rendered NAD Editorial by acceptance of N2257. ]


635. domain of allocator::address

Section: 20.1.2 [allocator.requirements] Status: New Submitter: Howard Hinnant Date: 2007-02-08

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with New status.

Discussion:

The table of allocator requirements in 20.1.2 [allocator.requirements] describes allocator::address as:

a.address(r)
a.address(s)

where r and s are described as:

a value of type X::reference obtained by the expression *p.

and p is

a value of type X::pointer, obtained by calling a1.allocate, where a1 == a

This all implies that to get the address of some value of type T that value must have been allocated by this allocator or a copy of it.

However sometimes container code needs to compare the address of an external value of type T with an internal value. For example list::remove(const T& t) may want to compare the address of the external value t with that of a value stored within the list. Similarly vector or deque insert may want to make similar comparisons (to check for self-referencing calls).

Mandating that allocator::address can only be called for values which the allocator allocated seems overly restrictive.
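
For illustration only (refers_into is a hypothetical helper, not container source), the kind of self-reference test described above, which calls address() on a value the allocator never allocated:

    #include <memory>

    template <class T, class Alloc>
    bool refers_into(const Alloc& a, const T& x,
                     typename Alloc::const_pointer first,
                     typename Alloc::const_pointer last)
    {
        typename Alloc::const_pointer p = a.address(x);
        return first <= p && p < last;
    }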

Proposed resolution:

Change 20.1.2 [allocator.requirements]:

r : a value of type X::reference obtained by the expression *p.

s : a value of type X::const_reference obtained by the expression *q or by conversion from a value r.

[ post Oxford: This would be rendered NAD Editorial by acceptance of N2257. ]


638. deque end invalidation during erase

Section: 23.2.2.3 [deque.modifiers] Status: New Submitter: Steve LoBasso Date: 2007-02-17

View all issues with New status.

Discussion:

The standard states at 23.2.2.3 [deque.modifiers]/4:

deque erase(...)

Effects: ... An erase at either end of the deque invalidates only the iterators and the references to the erased elements.

This does not state that iterators to end will be invalidated. It needs to be amended in such a way as to account for end invalidation.

Something like:

Any time the last element is erased, iterators to end are invalidated.

This would handle situations like:

erase(begin(), end())
erase(end() - 1)
pop_back()
resize(n, ...) where n < size()
pop_front() with size() == 1

Proposed resolution:


639. Still problems with exceptions during streambuf IO

Section: 27.6.1.2.3 [istream::extractors], 27.6.2.6.3 [ostream.inserters] Status: New Submitter: Daniel Krügler Date: 2007-02-17

View all other issues in [istream::extractors].

View all issues with New status.

Discussion:

There already exist two active DR's for the wording of 27.6.1.2.3 [istream::extractors]/13 from 14882:2003(E), namely 64 and 413.

Even with these proposed corrections, already maintained in N2134, I have the feeling that the current wording still does not properly handle the "exceptional" situation. The combination of para 14

"[..] Characters are extracted and inserted until any of the following occurs:

[..]

- an exception occurs (in which case the exception is caught)."

and 15

"If the function inserts no characters, it calls setstate(failbit), which may throw ios_base::failure (27.4.4.3). If it inserted no characters because it caught an exception thrown while extracting characters from *this and failbit is on in exceptions() (27.4.4.3), then the caught exception is rethrown."

both in N2134 seems to imply that any exception which occurs *after* at least one character has been inserted is caught and lost forever. It seems that even if failbit is on in exceptions(), rethrow is not allowed due to the wording "If it inserted no characters because it caught an exception thrown while extracting".

Is this behaviour by design?

I would like to add that its output counterpart in 27.6.2.6.3 [ostream.inserters]/7-9 (also N2134) does not exhibit such exception-loss behaviour. On the other hand, I wonder about several subtle differences compared to input:

1) Paragraph 8 says at its end:

"- an exception occurs while getting a character from sb."

Note that, in contrast to 27.6.1.2.3 [istream::extractors]/14, nothing is said here which would imply that such an exception will be caught.

2) Paragraph 9 says:

"If the function inserts no characters, it calls setstate(failbit) (which may throw ios_base::failure (27.4.4.3)). If an exception was thrown while extracting a character, the function sets failbit in error state, and if failbit is on in exceptions() the caught exception is rethrown."

The sentence starting with "If an exception was thrown" seems to imply that such an exception *should* be caught before.

Proposed resolution:

(a) In 27.6.1.2.3 [istream::extractors]/15 (N2134) change the sentence

If the function inserts no characters, it calls setstate(failbit), which may throw ios_base::failure (27.4.4.3). If it inserted no characters because it caught an exception thrown while extracting characters from *this an exception was thrown while extracting a character from *this, the function sets failbit in error state, and failbit is on in exceptions() (27.4.4.3), then the caught exception is rethrown.

(b) In 27.6.2.6.3 [ostream.inserters]/8 (N2134) change the sentence:

Gets characters from sb and inserts them in *this. Characters are read from sb and inserted until any of the following occurs:


645. Missing members in match_results

Section: 28.10 [re.results] Status: New Submitter: Daniel Krügler Date: 2007-02-26

View other active issues in [re.results].

View all other issues in [re.results].

View all issues with New status.

Discussion:

According to the description given in 28.10 [re.results]/2 the class template match_results "shall satisfy the requirements of a Sequence, [..], except that only operations defined for const-qualified Sequences are supported". Comparing the provided operations from 28.10 [re.results]/3 with the sequence/container tables 80 and 81 one recognizes the following missing operations:

1) The members

const_iterator rbegin() const;
const_iterator rend() const;

should exist because 23.1/10 demands these for containers (all sequences are containers) which support bidirectional iterators. Aren't these supported by match_results? This is not explicitly expressed, but it's somewhat implied by two arguments:

(a) Several typedefs delegate to iterator_traits<BidirectionalIterator>.

(b) The existence of const_reference operator[](size_type n) const implies even random-access iteration. I also suggest that match_results should explicitly mention which minimum iterator category is supported; if this does not include random access, the existence of operator[] is somewhat questionable.

2) The new "convenience" members

const_iterator cbegin() const;
const_iterator cend() const;
const_iterator crbegin() const;
const_iterator crend() const;

should be added according to tables 80/81.

Proposed resolution:


650. regex_token_iterator and const correctness

Section: 28.12.2 [re.tokiter] Status: New Submitter: Daniel Krügler Date: 2007-03-05

View all other issues in [re.tokiter].

View all issues with New status.

Discussion:

Both the class definition of regex_token_iterator (28.12.2 [re.tokiter]/6) and the later member specifications (28.12.2.2 [re.tokiter.comp]/1+2) declare both comparison operators as non-const functions. Furthermore, both dereference operators are unexpectedly also declared as non-const in 28.12.2 [re.tokiter]/6 as well as in 28.12.2.3 [re.tokiter.deref]/1+2.

Proposed resolution:

1) In (28.12.2 [re.tokiter]/6) change the current declarations

bool operator==(const regex_token_iterator&) const;
bool operator!=(const regex_token_iterator&) const;
const value_type& operator*() const;
const value_type* operator->() const;

2) In 28.12.2.2 [re.tokiter.comp] change the following declarations

bool operator==(const regex_token_iterator& right) const;
bool operator!=(const regex_token_iterator& right) const;

3) In 28.12.2.3 [re.tokiter.deref] change the following declarations

const value_type& operator*() const;
const value_type* operator->() const;

651. Missing preconditions for regex_token_iterator c'tors

Section: 28.12.2.1 [re.tokiter.cnstr] Status: New Submitter: Daniel Krügler Date: 2007-03-05

View all other issues in [re.tokiter.cnstr].

View all issues with New status.

Discussion:

The text provided in 28.12.2.1 [re.tokiter.cnstr]/2+3 describes the effects of the three non-default constructors of class template regex_token_iterator, but it does not clarify which values are legal values for submatch/submatches. This becomes an issue if one takes 28.12.2 [re.tokiter]/9 into account, which explains the notion of a "current match" by saying:

The current match is (*position).prefix() if subs[N] == -1, or (*position)[subs[N]] for any other value of subs[N].

It's not clear to me whether negative values other than -1 are legal arguments; it seems they are not.

Proposed resolution:

Add the following precondition paragraph just before the current 28.12.2.1 [re.tokiter.cnstr]/2:

Requires: Each of the initialization values of subs must be >= -1.


652. regex_iterator and const correctness

Section: 28.12.1 [re.regiter] Status: New Submitter: Daniel Krügler Date: 2007-03-05

View all issues with New status.

Discussion:

Both the class definition of regex_iterator (28.12.1 [re.regiter]/1) and the later member specification (28.12.1.2 [re.regiter.comp]/1+2) declare both comparison operators as non-const functions. Furthermore, both dereference operators are unexpectedly also declared as non-const in 28.12.1 [re.regiter]/1 as well as in 28.12.1.3 [re.regiter.deref]/1+2.

Proposed resolution:

1) In (28.12.1 [re.regiter]/1) change the current declarations

bool operator==(const regex_iterator&) const;
bool operator!=(const regex_iterator&) const;
const value_type& operator*() const;
const value_type* operator->() const;

2) In 28.12.1.3 [re.regiter.deref] change the following declarations

const value_type& operator*() const;
const value_type* operator->() const;

3) In 28.12.1.2 [re.regiter.comp] change the following declarations

bool operator==(const regex_iterator& right) const;
bool operator!=(const regex_iterator& right) const;

653. Library reserved names

Section: 1.2 [intro.refs] Status: New Submitter: Alisdair Meredith Date: 2007-03-08

View all other issues in [intro.refs].

View all issues with New status.

Discussion:

1.2 [intro.refs] Normative references

The following standards contain provisions which, through reference in this text, constitute provisions of this International Standard. At the time of publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based on this International Standard are encouraged to investigate the possibility of applying the most recent editions of the standards indicated below. Members of IEC and ISO maintain registers of currently valid International Standards.

I'm not sure how many of those standards reserve naming patterns that might affect us, but I am equally sure I don't own a copy of any of them to check!

The point is to list the reserved naming patterns, rather than the individual names themselves - although we may want to list C keywords that are valid identifiers in C++ but likely to cause trouble in shared headers (e.g. restrict)

Proposed resolution:


654. Missing IO roundtrip for random number engines

Section: 26.4.1.3 [rand.req.eng] Status: New Submitter: Daniel Krügler Date: 2007-03-08

View other active issues in [rand.req.eng].

View all other issues in [rand.req.eng].

View all issues with New status.

Discussion:

Table 98 and para 5 in 26.4.1.3 [rand.req.eng] specify the IO insertion and extraction semantics of random number engines. It can be shown (see below) that the specification of the extractor cannot guarantee to fulfill the requirement from para 5:

If a textual representation written via os << x was subsequently read via is >> v, then x == v provided that there have been no intervening invocations of x or of v.

The problem is that the extraction process described in table 98 fails to specify that it will initially set is.fmtflags to ios_base::dec; see table 104:

dec: converts integer input or generates integer output in decimal base

Proof: The following small program demonstrates the violation of requirements (exception safety not fulfilled):

#include <cassert>
#include <ostream>
#include <iostream>
#include <iomanip>
#include <sstream>

class RanNumEngine {
  int state;
public:
  RanNumEngine() : state(42) {}

  bool operator==(RanNumEngine other) const {
	  return state == other.state;
  }

  template <typename Ch, typename Tr>
  friend std::basic_ostream<Ch, Tr>& operator<<(std::basic_ostream<Ch, Tr>& os, RanNumEngine engine) {
	Ch old = os.fill(os.widen(' ')); // Sets space character
	std::ios_base::fmtflags f = os.flags();
	os << std::dec << std::left << engine.state; // Adds ios_base::dec|ios_base::left
	os.fill(old); // Undo
	os.flags(f);
	return os;
  }

  template <typename Ch, typename Tr>
  friend std::basic_istream<Ch, Tr>& operator>>(std::basic_istream<Ch, Tr>& is, RanNumEngine& engine) {
       // Uncomment only for the fix.

	//std::ios_base::fmtflags f = is.flags();
	//is >> std::dec;
	is >> engine.state;
	//is.flags(f);
	return is;
  }
};

int main() {
	std::stringstream s;
	s << std::setfill('#'); // No problem
        s << std::oct; // Yikes!
        // Here starts para 5 requirements:
	RanNumEngine x;
	s << x;
	RanNumEngine v;
	s >> v;
	assert(x == v); // Fails: 42 == 34
}

A second, minor issue seems to be that the insertion description from table 98 unnecessarily requires the addition of ios_base::fixed (which only influences floating-point output). It's not entirely clear to me whether the proposed standard requires that the state of random number engines be stored in integral types, but I have the impression that this is the intent; see e.g. para 3:

The specification of each random number engine defines the size of its state in multiples of the size of its result_type.

If other types than integrals are supported, then I wonder why no requirements are specified for the precision of the stream.

Proposed resolution:

1) In table 98 from 26.4.1.3 [rand.req.eng] in column "pre/post-condition", row expression "is >> x" change

Sets v's state as determined by reading its textual representation with is.fmtflags set to ios_base::dec from is.

2) In table 98 from 26.4.1.3 [rand.req.eng] in column "pre/post-condition", row expression "os << x" change

With os.fmtflags set to ios_base::dec|ios_base::fixed|ios_base::left and[..]


655. Signature of generate_canonical not useful

Section: 26.4.7.2 [rand.util.canonical] Status: New Submitter: Daniel Krügler Date: 2007-03-08

View all issues with New status.

Discussion:

In 26.4.2 [rand.synopsis] we have the declaration

template<class RealType, class UniformRandomNumberGenerator,
  size_t bits>
result_type generate_canonical(UniformRandomNumberGenerator& g);

Besides the "result_type" issue (already recognized by Bo Persson at Sun, 11 Feb 2007 05:26:47 GMT in this group) it's clear, that the template parameter order is not reasonably choosen: Obviously one always needs to specify all three parameters, although usually only two are required, namely the result type RealType and the wanted bits, because UniformRandomNumberGenerator can usually be deduced.

Proposed resolution:

In the header <random> synopsis 26.4.2 [rand.synopsis], as well as in the corresponding function description in 26.4.7.2 [rand.util.canonical] between para 2 and 3, change the declaration

template<class RealType, class UniformRandomNumberGenerator, size_t bits, class UniformRandomNumberGenerator>
RealType generate_canonical(UniformRandomNumberGenerator& g);

657. unclear requirement about header inclusion

Section: 17.4.2.1 [using.headers] Status: New Submitter: Gennaro Prota Date: 2007-03-14

View all issues with New status.

Discussion:

17.4.2.1 [using.headers] states:

A translation unit shall include a header only outside of any external declaration or definition, [...]

I see three problems with this requirement:

  1. The C++ standard doesn't define what an "external declaration" or an "external definition" are (incidentally the C99 standard does, and has a sentence very similar to the above regarding header inclusion).

    I think the intent is that the #include directive shall lexically appear outside *any* declaration; instead, when the issue was pointed out on comp.std.c++ at least one poster interpreted "external declaration" as "declaration of an identifier with external linkage". If this were the correct interpretation, then the two inclusions below would be legal:

      // at global scope
      static void f()
      {
    # include <cstddef>
      }
    
      static void g()
      {
    # include <stddef.h>
      }
    

    (note that while the first example is unlikely to compile correctly, the second one may well do)

  2. as the sentence stands, violations will require a diagnostic; is this the intent? It was pointed out on comp.std.c++ (by several posters) that at least one way to ensure a diagnostic exists:

    [If there is an actual file for each header,] one simple way to implement this would be to insert a reserved identifier such as __begin_header at the start of each standard header. This reserved identifier would be ignored for all other purposes, except that, at the appropriate point in phase 7, if it is found inside an external definition, a diagnostic is generated. There's many other similar ways to achieve the same effect.

    --James Kuyper, on comp.std.c++

  3. is the term "header" meant to be limited to standard headers? Clause 17 is all about the library, but still the general question is interesting and affects one of the points in the explicit namespaces proposal (n1691):

    Those seeking to conveniently enable argument-dependent lookups for all operators within an explicit namespace could easily create a header file that does so:

        namespace mymath::
        {
            #include "using_ops.hpp"
        }
    

Proposed resolution:


659. istreambuf_iterator should have an operator->()

Section: 24.5.3 [istreambuf.iterator] Status: New Submitter: Niels Dekker Date: 2007-03-25

View all other issues in [istreambuf.iterator].

View all issues with New status.

Discussion:

Greg Herlihy has clearly demonstrated that a user-defined input iterator should have an operator->(), even if its value type is a built-in type (comp.std.c++, "Re: Should any iterator have an operator->() in C++0x?", March 2007).  And since, as Howard Hinnant remarked in the same thread, the input iterator istreambuf_iterator doesn't have one, this must be a defect!

Based on Greg's example, the following code demonstrates the issue:

 #include <iostream> 
 #include <fstream>
 #include <streambuf> 

 typedef char C;
 int main ()
 {
   std::ifstream s("filename", std::ios::in);
   std::istreambuf_iterator<char> i(s);

   (*i).~C();  // This is well-formed...
   i->~C();  // ... so this should be supported!
 }

Of course, operator-> is also needed when the value_type of istreambuf_iterator is a class.

The operator-> could be implemented in various ways.  For instance, by storing the current value inside the iterator, and returning its address.  Or by returning a proxy, like operator_arrow_proxy, from http://www.boost.org/boost/iterator/iterator_facade.hpp
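
A minimal sketch of the proxy technique just mentioned (the names arrow_proxy_sketch, Token, and toy_iterator are invented for illustration; this is not proposed wording):

#include <iostream>

// A proxy that stores a copy of the current value and hands out its address,
// so that i->m expressions work even though operator* returns by value.
template <class T>
class arrow_proxy_sketch {
    T value_;
public:
    explicit arrow_proxy_sketch(const T& v) : value_(v) {}
    const T* operator->() const { return &value_; }
};

struct Token { char c; };   // stand-in value type with a member to reach via ->

// Toy "input iterator" whose operator* returns by value, like istreambuf_iterator.
class toy_iterator {
    Token current_;
public:
    explicit toy_iterator(char c) { current_.c = c; }
    Token operator*() const { return current_; }
    arrow_proxy_sketch<Token> operator->() const
    { return arrow_proxy_sketch<Token>(current_); }
};

int main()
{
    toy_iterator i('x');
    std::cout << (*i).c << ' ' << i->c << '\n';   // both forms now work
}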

I hope that the resolution of this issue will contribute to getting a clear and consistent definition of iterator concepts.

Proposed resolution:

Add to the synopsis in 24.5.3 [istreambuf.iterator]:

charT operator*() const;
pointer operator->() const;
istreambuf_iterator<charT,traits>& operator++();

Change 24.5.3 [istreambuf.iterator], p1:

The class template istreambuf_iterator reads successive characters from the streambuf for which it was constructed. operator* provides access to the current input character, if any. operator-> may return a proxy. Each time operator++ is evaluated, the iterator advances to the next input character. If the end of stream is reached (streambuf_type::sgetc() returns traits::eof()), the iterator becomes equal to the end of stream iterator value. The default constructor istreambuf_iterator() and the constructor istreambuf_iterator(0) both construct an end of stream iterator object suitable for use as an end-of-range.


660. Missing Bitwise Operations

Section: 20.5 [function.objects] Status: Ready Submitter: Beman Dawes Date: 2007-04-02

View all other issues in [function.objects].

View all issues with Ready status.

Discussion:

Section 20.5 [function.objects] provides function objects for some unary and binary operations, but others are missing. In a LWG reflector discussion, beginning with c++std-lib-18078, pros and cons of adding some of the missing operations were discussed. Bjarne Stroustrup commented "Why standardize what isn't used? Yes, I see the chicken and egg problems here, but it would be nice to see a couple of genuine uses before making additions."

A number of libraries, including Rogue Wave, GNU, Adobe ASL, and Boost, have already added these functions, either publicly or for internal use. For example, Doug Gregor commented: "Boost will also add ... (|, &, ^) in 1.35.0, because we need those function objects to represent various parallel collective operations (reductions, prefix reductions, etc.) in the new Message Passing Interface (MPI) library."

Because the bitwise operators have the strongest use cases, the proposed resolution is limited to them.
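
As an illustration of the use cases mentioned above, here is a sketch that folds a set of flag words with a generic algorithm; bit_or_sketch is a stand-in with the semantics of the proposed bit_or:

#include <functional>
#include <iostream>
#include <numeric>

// Stand-in with the semantics of the proposed bit_or (the name bit_or_sketch is
// used to avoid suggesting it already exists in <functional>).
template <class T> struct bit_or_sketch : std::binary_function<T, T, T> {
    T operator()(const T& x, const T& y) const { return x | y; }
};

int main()
{
    // Typical use case: fold a set of flag words into one with a generic algorithm.
    unsigned flags[] = { 0x01u, 0x04u, 0x10u };
    unsigned combined = std::accumulate(flags, flags + 3, 0u, bit_or_sketch<unsigned>());
    std::cout << std::hex << combined << '\n';   // prints 15 (i.e. 0x15)
}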

Proposed resolution:

To 20.5 [function.objects], Function objects, paragraph 2, add to the header <functional> synopsis:

template <class T> struct bit_and;
template <class T> struct bit_or;
template <class T> struct bit_xor;

At a location in clause 20 to be determined by the Project Editor, add:

The library provides basic function object classes for all of the bitwise operators in the language ([expr.bit.and], [expr.or], [expr.xor]).

template <class T> struct bit_and : binary_function<T,T,T> {
  T operator()(const T& x , const T& y ) const;
};

operator() returns x & y .

template <class T> struct bit_or : binary_function<T,T,T> {
  T operator()(const T& x , const T& y ) const;
};

operator() returns x | y .

template <class T> struct bit_xor : binary_function<T,T,T> {
  T operator()(const T& x , const T& y ) const;
};

operator() returns x ^ y .


661. New 27.6.1.2.2 changes make special extractions useless

Section: 27.6.1.2.2 [istream.formatted.arithmetic] Status: New Submitter: Daniel Krügler Date: 2007-04-01

View other active issues in [istream.formatted.arithmetic].

View all other issues in [istream.formatted.arithmetic].

View all issues with New status.

Discussion:

Among the more drastic changes to 27.6.1.2.2 [istream.formatted.arithmetic] in the current draft N2134 is the explicit description of the extraction of the types short and int in terms of as-if code fragments.

  1. The corresponding as-if extractions in paragraphs 2 and 3 will never result in a change of the operator>> argument val, because the contents of the local variable lval are in no case written into val. Furthermore, both fragments need a currently missing parenthesis at the beginning of the if-statement to be valid C++.
  2. I would like to ask whether the omission of a similar explicit extraction of unsigned short and unsigned int in terms of long (compared to their corresponding new insertions, as described in 27.6.2.6.2 [ostream.inserters.arithmetic]) is a deliberate decision or an oversight.

Proposed resolution:

  1. In 27.6.1.2.2 [istream.formatted.arithmetic]/2 change the current as-if code fragment

    typedef num_get<charT,istreambuf_iterator<charT,traits> > numget;
    iostate err = 0;
    long lval;
    use_facet<numget>(loc).get(*this, 0, *this, err, lval );
    if (err == 0) {
      && if (lval < numeric_limits<short>::min() || numeric_limits<short>::max() < lval))
          err = ios_base::failbit;
      else
        val = static_cast<short>(lval);
    }
    setstate(err);
    

    Similarly, in 27.6.1.2.2 [istream.formatted.arithmetic]/3 change the current as-if fragment

    typedef num_get<charT,istreambuf_iterator<charT,traits> > numget;
    iostate err = 0;
    long lval;
    use_facet<numget>(loc).get(*this, 0, *this, err, lval );
    if (err == 0) {
      && if (lval < numeric_limits<int>::min() || numeric_limits<int>::max() < lval))
          err = ios_base::failbit;
      else
        val = static_cast<int>(lval);
    }
    setstate(err);
    
  2. ---

663. Complexity Requirements

Section: 17.3.1.3 [structure.specifications] Status: New Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [structure.specifications].

View all other issues in [structure.specifications].

View all issues with New status.

Discussion:

17.3.1.3 [structure.specifications] para 5 says

-5- Complexity requirements specified in the library  clauses are upper bounds, and implementations that provide better complexity guarantees satisfy the requirements.

The following objection has been raised:

The library clauses suggest general guidelines regarding complexity, but we have been unable to discover any absolute hard-and-fast formulae for these requirements.  Unless or until the Library group standardizes specific hard-and-fast formulae, we regard all the complexity requirements as subject to a  "fudge factor" without any intrinsic upper bound.

[Plum ref  _23213Y31 etc]

Proposed resolution:


664. do_unshift for codecvt<char, char, mbstate_t>

Section: 22.2.1.4.2 [locale.codecvt.virtuals] Status: New Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.codecvt.virtuals].

View all other issues in [locale.codecvt.virtuals].

View all issues with New status.

Discussion:

22.2.1.4.2 [locale.codecvt.virtuals], para 7 says (regarding do_unshift):

Effects: Places characters starting at to that should be appended to terminate a sequence when the current stateT is given by state.237) Stores no more than (to_limit - to) destination elements, and leaves the to_next pointer pointing one beyond the last element successfully stored. codecvt<char, char, mbstate_t> stores no characters.

The following objection has been raised:

Since the C++ Standard permits a nontrivial conversion for the required instantiations of codecvt, it is overly restrictive to say that do_unshift must store no characters and return noconv.

[Plum ref _222152Y50]

Proposed resolution:


665. do_unshift return value

Section: 22.2.1.4.2 [locale.codecvt.virtuals] Status: New Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.codecvt.virtuals].

View all other issues in [locale.codecvt.virtuals].

View all issues with New status.

Discussion:

22.2.1.4.2 [locale.codecvt.virtuals], para 8 says:

codecvt<char,char,mbstate_t>, returns noconv.

The following objection has been raised:

Despite what the C++ Standard  says, unshift can't always return noconv for the default facets, since  they can be nontrivial. At least one implementation does whatever the  C functions do.

[Plum ref _222152Y62]

Proposed resolution:


666. moneypunct::do_curr_symbol()

Section: 22.2.6.3.2 [locale.moneypunct.virtuals] Status: New Submitter: Thomas Plum Date: 2007-04-16

View all other issues in [locale.moneypunct.virtuals].

View all issues with New status.

Discussion:

22.2.6.3.2 [locale.moneypunct.virtuals], para 4 footnote 257 says

257) For international  specializations (second template parameter true) this is always four  characters long, usually three letters and a space.

The following objection has been raised:

The international currency  symbol is whatever the underlying locale says it is, not necessarily  four characters long.

[Plum ref _222632Y41]

Proposed resolution:


667. money_get's widened minus sign

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: New Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with New status.

Discussion:

22.2.6.1.2 [locale.money.get.virtuals], para 1 says:

The result is returned as an integral value  stored in units or as a sequence of digits possibly preceded by a  minus sign (as produced by ct.widen(c) where c is '-' or in the range  from '0' through '9', inclusive) stored in digits.

The following objection has been raised:

Some implementations interpret this to mean that a facet derived from ctype<wchar_t> can provide its own member do_widen(char) which produces e.g. L'@' for the "widened" minus sign, and that the '@' symbol will appear in the resulting sequence of digits.  Other implementations have assumed that one or more places in the standard permit the implementation to "hard-wire" L'-' as the "widened" minus sign.  Are both interpretations permissible, or only  one?

[Plum ref _222612Y14]

Furthermore: if ct.widen('9') produces L'X' (a non-digit), does a parse fail if a '9' appears in the subject string? [Plum ref _22263Y33]

Proposed resolution:


668. money_get's empty minus sign

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: New Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with New status.

Discussion:

22.2.6.1.2 [locale.money.get.virtuals], para 3 says:

If pos or neg is empty, the sign component is optional, and if no sign is detected, the result is given the sign  that corresponds to the source of the empty string.

The following objection has been raised:

A negative_sign of "" means "there is no  way to write a negative sign" not "any null sequence is a negative  sign, so it's always there when you look for it".

[Plum ref _222612Y32]

Proposed resolution:


669. Equivalent positive and negative signs in money_get

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: New Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with New status.

Discussion:

22.2.6.1.2 [locale.money.get.virtuals], para 3 sentence 4 says:

If the first character of pos is equal to the first character of neg,  or if both strings are empty, the result is given a positive sign.

One interpretation is that an input sequence must match either the positive pattern or the negative pattern, and then in either event it is interpreted as positive.  The following objection has been raised:

The input can successfully match only a positive sign, so the negative pattern is an unsuccessful match.

[Plum ref _222612Y34, 222612Y51b]

Proposed resolution:


670. money_base::pattern and space

Section: 22.2.6.3 [locale.moneypunct] Status: New Submitter: Thomas Plum Date: 2007-04-16

View all issues with New status.

Discussion:

22.2.6.3 [locale.moneypunct], para 2 says:

The value space indicates that at least one space is required at  that position.

The following objection has been raised:

Whitespace is optional when matching space. (See 22.2.6.1.2 [locale.money.get.virtuals], para 2.)

[Plum ref _22263Y22]

Proposed resolution:


671. precision of hexfloat

Section: 22.2.2.2.2 [facet.num.put.virtuals] Status: New Submitter: John Salmon Date: 2007-04-20

View all other issues in [facet.num.put.virtuals].

View all issues with New status.

Discussion:

I am trying to understand how TR1 supports hex float (%a) output.

As far as I can tell, it does so via the following:

8.15 Additions to header <locale> [tr.c99.locale]

In subclause 22.2.2.2.2 [facet.num.put.virtuals], Table 58 Floating-point conversions, after the line: floatfield == ios_base::scientific %E

add the two lines:

floatfield == ios_base::fixed | ios_base::scientific && !uppercase %a
floatfield == ios_base::fixed | ios_base::scientific %A 2

[Note: The additional requirements on print and scan functions, later in this clause, ensure that the print functions generate hexadecimal floating-point fields with a %a or %A conversion specifier, and that the scan functions match hexadecimal floating-point fields with a %g conversion specifier.  end note]

Following the thread, in 22.2.2.2.2 [facet.num.put.virtuals], we find:

For conversion from a floating-point type, if (flags & fixed) != 0 or if str.precision() > 0, then str.precision() is specified in the conversion specification.

This would seem to imply that when floatfield == fixed|scientific, the precision of the conversion specifier is to be taken from str.precision().  Is this really what's intended?  I sincerely hope that I'm either missing something or this is an oversight.  Please tell me that the committee did not intend to mandate that hex floats (and doubles) should by default be printed as if by %.6a.
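
For illustration, the corresponding C-level behaviour, using printf's %a conversion, which is what the TR1 wording maps onto; the exact digits shown in the comments are typical for IEEE double and may vary by implementation:

#include <cstdio>

int main()
{
    double x = 1.0 / 3.0;
    std::printf("%a\n", x);    // default precision: typically 0x1.5555555555555p-2
    std::printf("%.6a\n", x);  // precision 6: typically 0x1.555555p-2
}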

[ Howard: I think the fundamental issue we overlooked was that with %f, %e, %g, the default precision was always 6.  With %a the default precision is not 6, it is infinity.  So for the first time, we need to distinguish between the default value of precision, and the precision value 6. ]

Proposed resolution:


672. Swappable requirements need updating

Section: 20.1.1 [utility.arg.requirements] Status: New Submitter: Howard Hinnant Date: 2007-05-04

View all other issues in [utility.arg.requirements].

View all issues with New status.

Discussion:

The current Swappable is:

Table 37: Swappable requirements [swappable]
expression: swap(s,t)
return type: void
post-condition: t has the value originally held by u, and u has the value originally held by t

The Swappable requirement is met by satisfying one or more of the following conditions:

  • T is Swappable if T satisfies the CopyConstructible requirements (Table 34) and the CopyAssignable requirements (Table 36);
  • T is Swappable if a namespace scope function named swap exists in the same namespace as the definition of T, such that the expression swap(t,u) is valid and has the semantics described in this table.

With the passage of rvalue reference into the language, Swappable needs to be updated to require only MoveConstructible and MoveAssignable. This is a minimum.

Additionally we may want to support proxy references such that the following code is acceptable:

namespace Mine {

template <class T>
struct proxy {...};

template <class T>
struct proxied_iterator
{
   typedef T value_type;
   typedef proxy<T> reference;
   reference operator*() const;
   ...
};

struct A
{
   // heavy type, has an optimized swap, maybe isn't even copyable or movable, just swappable
   void swap(A&);
   ...
};

void swap(A&, A&);
void swap(proxy<A>, A&);
void swap(A&, proxy<A>);
void swap(proxy<A>, proxy<A>);

}  // Mine

...

Mine::proxied_iterator<Mine::A> i(...);
Mine::A a;
swap(*i, a);

I.e., here the user has enabled swapping between a proxy to a class and the class itself, and the final line is a call to that swap. We do not need to do anything in terms of implementation except not block their way with overly constrained concepts. That is, the Swappable concept should be expanded to allow swapping between two different types for the case that one is binding to a user-defined swap.
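
A compilable miniature of the scenario sketched above (the proxy here just holds a pointer; the real use case is proxied iterators, and none of these names are proposed wording):

#include <algorithm>
#include <iostream>

namespace Mine {

struct A {
    int value;
    void swap(A& other) { std::swap(value, other.value); }
};

// Proxy "reference" to an A stored elsewhere (purely illustrative).
struct proxy { A* target; };

void swap(A& x, A& y)       { x.swap(y); }
void swap(proxy x, A& y)    { x.target->swap(y); }
void swap(A& x, proxy y)    { y.target->swap(x); }
void swap(proxy x, proxy y) { x.target->swap(*y.target); }

}  // namespace Mine

int main()
{
    Mine::A a1 = { 1 };
    Mine::A a2 = { 2 };
    Mine::proxy p = { &a1 };
    swap(p, a2);                                        // found by ADL: swap(proxy, A&)
    std::cout << a1.value << ' ' << a2.value << '\n';   // prints: 2 1
}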

Proposed resolution:

Change 20.1.1 [utility.arg.requirements]:

-1- The template definitions in the C++ Standard Library refer to various named requirements whose details are set out in tables 31-38. In these tables, T and V are is a types to be supplied by a C++ program instantiating a template; a, b, and c are values of type const T; s and t are modifiable lvalues of type T; u is a value of type (possibly const) T; and rv is a non-const rvalue of type T; and v is a value of type V.

Table 37: Swappable requirements [swappable]
expression: swap(s,tv)
return type: void
post-condition: ts has the value originally held by uv, and uv has the value originally held by ts

The Swappable requirement is met by satisfying one or more of the following conditions:

  • T is Swappable if T and V are the same type and T satisfies the CopyConstructible MoveConstructible requirements (Table 34 33) and the CopyAssignable MoveAssignable requirements (Table 36 35);
  • T is Swappable with V if a namespace scope function named swap exists in the same namespace as the definition of T or V, such that the expression swap(ts,u v) is valid and has the semantics described in this table.

[Some editorial issues are also cleaned up by this resolution.]


673. unique_ptr update

Section: 20.6.5 [unique.ptr] Status: New Submitter: Howard Hinnant Date: 2007-05-04

View all issues with New status.

Discussion:

Since the publication of N1856 there have been a few small but significant advances which should be included into unique_ptr. There exists a reference implementation for all of these changes.

Proposed resolution:

[ I am grateful for the generous aid of Peter Dimov and Ion Gaztañaga in helping formulate and review the proposed resolutions below. ]


674. shared_ptr interface changes for consistency with N1856

Section: 20.6.6.2 [util.smartptr.shared] Status: New Submitter: Peter Dimov Date: 2007-05-05

View all other issues in [util.smartptr.shared].

View all issues with New status.

Discussion:

N1856 does not propose any changes to shared_ptr. It needs to be updated to use an rvalue reference where appropriate and to interoperate with unique_ptr as it does with auto_ptr.

Proposed resolution:

Change 20.6.6.2 [util.smartptr.shared] as follows:

template<class Y> explicit shared_ptr(auto_ptr<Y>&&& r);
private:
template<class Y, class D> explicit shared_ptr(const unique_ptr<Y,D>& r);
public:
template<class Y, class D> explicit shared_ptr(unique_ptr<Y,D>&& r);
...
template<class Y> shared_ptr& operator=(auto_ptr<Y>&&& r);

private:
template<class Y, class D> shared_ptr& operator=(const unique_ptr<Y,D>& r);
public:
template<class Y, class D> shared_ptr& operator=(unique_ptr<Y,D>&& r);

Change 20.6.6.2.1 [util.smartptr.shared.const] as follows:

template<class Y> shared_ptr(auto_ptr<Y>&&& r);

Add to 20.6.6.2.1 [util.smartptr.shared.const]:

template<class Y, class D> shared_ptr(unique_ptr<Y, D>&& r);

Effects: Equivalent to shared_ptr( r.release(), r.get_deleter() ) when D is not a reference type, shared_ptr( r.release(), ref( r.get_deleter() ) ) otherwise.

Exception safety: If an exception is thrown, the constructor has no effect.

Change 20.6.6.2.3 [util.smartptr.shared.assign] as follows:

template<class Y> shared_ptr& operator=(auto_ptr<Y>&&& r);

Add to 20.6.6.2.3 [util.smartptr.shared.assign]:

template<class Y, class D> shared_ptr& operator=(unique_ptr<Y,D>&& r);

-4- Effects: Equivalent to shared_ptr(std::move(r)).swap(*this).

-5- Returns: *this.
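
A usage illustration of the proposed unique_ptr interoperation, assuming a compiler and library that implement rvalue references and these overloads (as C++11 implementations later did):

#include <iostream>
#include <memory>
#include <utility>

int main()
{
    std::unique_ptr<int> u(new int(42));
    std::shared_ptr<int> s(std::move(u));   // uses shared_ptr(unique_ptr<Y, D>&&)
    std::cout << (u.get() == 0) << ' ' << *s << '\n';   // prints: 1 42
}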


675. Move assignment of containers

Section: 23.1 [container.requirements] Status: New Submitter: Howard Hinnant Date: 2007-05-05

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with New status.

Discussion:

James Hopkin pointed out to me that if vector<T> move assignment is O(1) (just a swap) then containers such as vector<shared_ptr<ostream>> might have the wrong semantics under move assignment when the source is not truly an rvalue, but a moved-from lvalue (destructors could run late).

vector<shared_ptr<ostream>> v1;
vector<shared_ptr<ostream>> v2;
...
v1 = v2;               // #1
v1 = std::move(v2);    // #2

Move semantics means not caring what happens to the source (v2 in this example). It doesn't mean not caring what happens to the target (v1). In the above example both assignments should have the same effect on v1. Any non-shared ostream's v1 owns before the assignment should be closed, whether v1 is undergoing copy assignment or move assignment.

This implies that the semantics of move assignment of a generic container should be clear-then-swap instead of just swap. An alternative which could achieve the same effect would be to move assign each element. In either case, the complexity of move assignment needs to be relaxed to O(v1.size()).

The performance hit of this change is not nearly as drastic as it sounds. In practice, the target of a move assignment has always just been move constructed or move assigned from. Therefore under clear-then-swap semantics (in this common use case) we are still achieving O(1) complexity.
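
A sketch of the clear-then-swap semantics argued for above, written as a free function over std::vector; the helper name is invented and this is not proposed wording:

#include <iostream>
#include <vector>

// Hypothetical helper illustrating the semantics: the target's old elements are
// destroyed immediately, and only then is the source's storage adopted.
template <class T>
void move_assign_clear_swap(std::vector<T>& target, std::vector<T>& source)
{
    target.clear();        // run destructors of target's elements right away
    target.swap(source);   // then take over source's buffer in constant time
}

int main()
{
    std::vector<int> v1(3, 1), v2(5, 2);
    move_assign_clear_swap(v1, v2);                        // v1 takes v2's old contents
    std::cout << v1.size() << ' ' << v2.size() << '\n';    // prints: 5 0
}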

Proposed resolution:

Change 23.1 [container.requirements]:

Table 86: Container requirements
expression: a = rv;
return type: X&
operational semantics: All existing elements of a are either move assigned or destructed
assertion/note pre/post-condition: a shall be equal to the value that rv had before this construction
complexity: constant linear

676. Moving the unordered containers

Section: 23.4 [unord] Status: New Submitter: Howard Hinnant Date: 2007-05-05

View all other issues in [unord].

View all issues with New status.

Discussion:

Move semantics are missing from the unordered containers. The proposed resolution below adds move-support consistent with N1858 and the current working draft.

The current proposed resolution simply lists the requirements for each function. These might better be hoisted into the requirements table for unordered associative containers. Furthermore, a mild reorganization of the container requirements could well be in order. This defect report is purposefully ignoring these larger issues and just focusing on getting the unordered containers "moved".
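
For illustration, assuming a compiler and library that implement rvalue references and the proposed overloads (as C++11 implementations later did), inserting an rvalue moves rather than copies:

#include <iostream>
#include <string>
#include <unordered_set>
#include <utility>

int main()
{
    std::unordered_set<std::string> s;
    std::string big(1000, 'x');
    s.insert(std::move(big));        // selects insert(value_type&&): no copy of the buffer
    std::cout << s.size() << '\n';   // prints: 1
}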

Proposed resolution:

Add to 23.4 [unord]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y); 

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

...

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Value, Hash, Pred, Alloc>& x, 
            unordered_set<Value, Hash, Pred, Alloc>& y); 

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Value, Hash, Pred, Alloc>& x, 
            unordered_set<Value, Hash, Pred, Alloc>&& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Value, Hash, Pred, Alloc>&& x, 
            unordered_set<Value, Hash, Pred, Alloc>& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Value, Hash, Pred, Alloc>& x, 
            unordered_multiset<Value, Hash, Pred, Alloc>& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Value, Hash, Pred, Alloc>& x, 
            unordered_multiset<Value, Hash, Pred, Alloc>&& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Value, Hash, Pred, Alloc>&& x, 
            unordered_multiset<Value, Hash, Pred, Alloc>& y);

unordered_map

Change 23.4.1 [unord.map]:

class unordered_map
{
    ...
    unordered_map(const unordered_map&);
    unordered_map(unordered_map&&);
    ~unordered_map();
    unordered_map& operator=(const unordered_map&);
    unordered_map& operator=(unordered_map&&);
    ...
    // modifiers 
    std::pair<iterator, bool> insert(const value_type& obj); 
    template <class P> pair<iterator, bool> insert(P&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    template <class P> iterator       insert(iterator hint, P&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    template <class P> const_iterator insert(const_iterator hint, P&& obj);
    ...
    void swap(unordered_map&&);
    ...
    mapped_type& operator[](const key_type& k);
    mapped_type& operator[](key_type&& k);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.1.1 [unord.map.cnstr]:

template <class InputIterator>
  unordered_map(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue pair<key_type, mapped_type>, then both key_type and mapped_type shall be CopyConstructible.

Add to 23.4.1.2 [unord.map.elem]:

mapped_type& operator[](const key_type& k);

...

Requires: key_type shall be CopyConstructible and mapped_type shall be DefaultConstructible.

mapped_type& operator[](key_type&& k);

Effects: If the unordered_map does not already contain an element whose key is equivalent to k , inserts the value std::pair<const key_type, mapped_type>(std::move(k), mapped_type()).

Requires: mapped_type shall be DefaultConstructible.

Returns: A reference to x.second, where x is the (unique) element whose key is equivalent to k.

Add new section [unord.map.modifiers]:

pair<iterator, bool> insert(const value_type& x);
template <class P> pair<iterator, bool> insert(P&& x);
iterator       insert(iterator hint, const value_type& x);
template <class P> iterator       insert(iterator hint, P&& x);
const_iterator insert(const_iterator hint, const value_type& x);
template <class P> const_iterator insert(const_iterator hint, P&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require both the key_type and the mapped_type to be CopyConstructible.

P shall be convertible to value_type. If P is instantiated as a reference type, then the argument x is copied from. Otherwise x is considered to be an rvalue as it is converted to value_type and inserted into the unordered_map. Specifically, in such cases CopyConstructible is not required of key_type or mapped_type unless the conversion from P specifically requires it (e.g. if P is a tuple<const key_type, mapped_type>, then key_type must be CopyConstructible).

The signature taking InputIterator parameters requires CopyConstructible of both key_type and mapped_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.1.3 [unord.map.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

unordered_multimap

Change 23.4.2 [unord.multimap]:

class unordered_multimap
{
    ...
    unordered_multimap(const unordered_multimap&);
    unordered_multimap(unordered_multimap&&);
    ~unordered_multimap();
    unordered_multimap& operator=(const unordered_multimap&);
    unordered_multimap& operator=(unordered_multimap&&);
    ...
    // modifiers 
    iterator insert(const value_type& obj); 
    template <class P> iterator insert(P&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    template <class P> iterator       insert(iterator hint, P&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    template <class P> const_iterator insert(const_iterator hint, P&& obj);
    ...
    void swap(unordered_multimap&&);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.2.1 [unord.multimap.cnstr]:

template <class InputIterator>
  unordered_multimap(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue pair<key_type, mapped_type>, then both key_type and mapped_type shall be CopyConstructible.

Add new section [unord.multimap.modifiers]:

iterator insert(const value_type& x);
template <class P> iterator       insert(P&& x);
iterator       insert(iterator hint, const value_type& x);
template <class P> iterator       insert(iterator hint, P&& x);
const_iterator insert(const_iterator hint, const value_type& x);
template <class P> const_iterator insert(const_iterator hint, P&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require both the key_type and the mapped_type to be CopyConstructible.

P shall be convertible to value_type. If P is instantiated as a reference type, then the argument x is copied from. Otherwise x is considered to be an rvalue as it is converted to value_type and inserted into the unordered_multimap. Specifically, in such cases CopyConstructible is not required of key_type or mapped_type unless the conversion from P specifically requires it (e.g. if P is a tuple<const key_type, mapped_type>, then key_type must be CopyConstructible).

The signature taking InputIterator parameters requires CopyConstructible of both key_type and mapped_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.2.2 [unord.multimap.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

unordered_set

Change 23.4.3 [unord.set]:

class unordered_set
{
    ...
    unordered_set(const unordered_set&);
    unordered_set(unordered_set&&);
    ~unordered_set();
    unordered_set& operator=(const unordered_set&);
    unordered_set& operator=(unordered_set&&);
    ...
    // modifiers 
    std::pair<iterator, bool> insert(const value_type& obj); 
    pair<iterator, bool> insert(value_type&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    iterator       insert(iterator hint, value_type&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    const_iterator insert(const_iterator hint, value_type&& obj);
    ...
    void swap(unordered_set&&);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.3.1 [unord.set.cnstr]:

template <class InputIterator>
  unordered_set(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue value_type, then the value_type shall be CopyConstructible.

Add new section [unord.set.modifiers]:

pair<iterator, bool> insert(const value_type& x);
pair<iterator, bool> insert(value_type&& x);
iterator       insert(iterator hint, const value_type& x);
iterator       insert(iterator hint, value_type&& x);
const_iterator insert(const_iterator hint, const value_type& x);
const_iterator insert(const_iterator hint, value_type&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require the value_type to be CopyConstructible.

The signature taking InputIterator parameters requires CopyConstructible of value_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.3.2 [unord.set.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);

unordered_multiset

Change 23.4.4 [unord.multiset]:

class unordered_multiset
{
    ...
    unordered_multiset(const unordered_multiset&);
    unordered_multiset(unordered_multiset&&);
    ~unordered_multiset();
    unordered_multiset& operator=(const unordered_multiset&);
    unordered_multiset& operator=(unordered_multiset&&);
    ...
    // modifiers 
    iterator insert(const value_type& obj); 
    iterator insert(value_type&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    iterator       insert(iterator hint, value_type&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    const_iterator insert(const_iterator hint, value_type&& obj);
    ...
    void swap(unordered_multiset&&);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.4.1 [unord.multiset.cnstr]:

template <class InputIterator>
  unordered_multiset(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue value_type, then the value_type shall be CopyConstructible.

Add new section [unord.multiset.modifiers]:

iterator insert(const value_type& x);
iterator insert(value_type&& x);
iterator       insert(iterator hint, const value_type& x);
iterator       insert(iterator hint, value_type&& x);
const_iterator insert(const_iterator hint, const value_type& x);
const_iterator insert(const_iterator hint, value_type&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require the value_type to be CopyConstructible.

The signature taking InputIterator parameters requires CopyConstructible of value_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.4.2 [unord.multiset.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);

677. Weaknesses in seed_seq::randomize [rand.util.seedseq]

Section: 26.4.7.1 [rand.util.seedseq] Status: New Submitter: Charles Karney Date: 2007-05-15

View other active issues in [rand.util.seedseq].

View all other issues in [rand.util.seedseq].

View all issues with New status.

Discussion:

seed_seq::randomize provides a mechanism for initializing random number engines which ideally would yield "distant" states when given "close" seeds. The algorithm for seed_seq::randomize given in the current Working Draft for C++, N2284 (2007-05-08), has 3 weaknesses

  1. Collisions in state. Because of the way the state is initialized, seeds of different lengths may result in the same state. The current version of seed_seq has the following properties:

    The proposed algorithm (below) has the considerably stronger properties:

  2. Poor mixing of v's entropy into the state. Consider v.size() == n and hold v[n/2] thru v[n-1] fixed while varying v[0] thru v[n/2-1], a total of 2^(16n) possibilities. Because of the simple recursion used in seed_seq, begin[n/2] thru begin[n-1] can take on only 2^64 possible states.

    The proposed algorithm uses a more complex recursion which results in much better mixing.

  3. seed_seq::randomize is undefined for v.size() == 0. The proposed algorithm remedies this.

The current algorithm for seed_seq::randomize is adapted by me from the initialization procedure for the Mersenne Twister by Makoto Matsumoto and Takuji Nishimura. The weakness (2) given above was communicated to me by Matsumoto last year.

The proposed replacement for seed_seq::randomize is due to Mutsuo Saito, a student of Matsumoto, and is given in the implementation of the SIMD-oriented Fast Mersenne Twister random number generator SFMT. http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/SFMT-src-1.2.tar.gz

See Mutsuo Saito, An Application of Finite Field: Design and Implementation of 128-bit Instruction-Based Fast Pseudorandom Number Generator, Master's Thesis, Dept. of Math., Hiroshima University (Feb. 2007) http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/M062821.pdf

One change has been made here, namely to treat the case of small n (setting t = (n-1)/2 for n < 7).

Since seed_seq was introduced relatively recently there is little cost in making this incompatible improvement to it.

Proposed resolution:

The following is the proposed replacement of paragraph 8 of section 26.4.7.1 [rand.util.seedseq]:

-8- Effects: With s = v.size() and n = end - begin, fills the supplied range [begin, end) according to the following algorithm in which each operation is to be carried out modulo 2^32, each indexing operator applied to begin is to be taken modulo n, each indexing operator applied to v is to be taken modulo s, and T(x) is defined as x xor (x rshift 27):

  1. Set begin[0] to 5489 + s. Then, iteratively for k = 1, ... , n - 1, set begin[k] to

    1812433253 * T(begin[k-1]) + k.
    
  2. With m as the larger of s and n, transform each element of the range (possibly more than once): iteratively for k = 0, ..., m - 1, set begin[k] to

    (begin[k] xor (1664525 * T(begin[k-1]))) + v[k] + (k mod s).
    
  3. Transform each element of the range one last time, beginning where the previous step ended: iteratively for k = m mod n, ..., n - 1, 0, ..., (m - 1) mod n, set begin[k] to

    (begin[k] xor (1566083941 * T(begin[k-1]))) - k.
    

fill(begin, end, 0x8b8b8b8b);

if (n >= 623)
    t = 11;
else if (n >= 68)
    t = 7;
else if (n >= 39)
    t = 5;
else if (n >= 7)
    t = 3;
else
    t = (n-1)/2;

p = (n-t)/2;
q = p+t;

m = max(s+1, n);
for (k = 0; k < m; k += 1) {
    r = 1664525 * T(begin[k] ^ begin[k+p] ^ begin[k-1]);
    begin[k+p] += r;
    r += k % n;
    if (k == 0)
        r += s;
    else if (k <= s)
        r += v[k-1];
    begin[k+q] += r;
    begin[k] = r;
}

for (k = m; k < m + n; k += 1) {
    r = 1566083941 * T(begin[k] + begin[k+p] + begin[k-1]);
    begin[k+p] ^= r;
    r -= k % n;
    begin[k+q] ^= r;
    begin[k] = r;
}
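
For concreteness, the algorithm above can be written out as a self-contained C++ sketch, assuming 32-bit arithmetic modulo 2^32 via uint32_t and T(x) = x xor (x rshift 27); the name seed_fill_sketch and the vector-based interface are illustrative, not proposed wording.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

namespace sketch {

// T(x) as used by the proposed algorithm.
inline std::uint32_t T(std::uint32_t x) { return x ^ (x >> 27); }

// Fills 'out' from the seed vector 'v' following the code given above.
void seed_fill_sketch(const std::vector<std::uint32_t>& v,
                      std::vector<std::uint32_t>& out)
{
    const std::size_t s = v.size();
    const std::size_t n = out.size();
    if (n == 0)
        return;                                    // empty output range: nothing to do

    std::fill(out.begin(), out.end(), 0x8b8b8b8bu);

    const std::size_t t = n >= 623 ? 11 : n >= 68 ? 7 : n >= 39 ? 5
                        : n >= 7   ? 3  : (n - 1) / 2;
    const std::size_t p = (n - t) / 2;
    const std::size_t q = p + t;
    const std::size_t m = std::max(s + 1, n);

    // Every index into the output range is taken modulo n.
    auto at = [&](std::size_t k) -> std::uint32_t& { return out[k % n]; };

    for (std::size_t k = 0; k < m; ++k) {
        std::uint32_t r = 1664525u * T(at(k) ^ at(k + p) ^ at(k + n - 1));
        at(k + p) += r;
        r += static_cast<std::uint32_t>(k % n);
        if (k == 0)
            r += static_cast<std::uint32_t>(s);
        else if (k <= s)
            r += v[k - 1];
        at(k + q) += r;
        at(k) = r;
    }
    for (std::size_t k = m; k < m + n; ++k) {
        std::uint32_t r = 1566083941u * T(at(k) + at(k + p) + at(k + n - 1));
        at(k + p) ^= r;
        r -= static_cast<std::uint32_t>(k % n);
        at(k + q) ^= r;
        at(k) = r;
    }
}

} // namespace sketch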

678. Changes for [rand.req.eng]

Section: 26.4.1.3 [rand.req.eng] Status: New Submitter: Charles Karney Date: 2007-05-15

View other active issues in [rand.req.eng].

View all other issues in [rand.req.eng].

View all issues with New status.

Discussion:

Section 26.4.1.3 [rand.req.eng] Random number engine requirements:

This change follows naturally from the proposed change to seed_seq::randomize in 677.

In table 104 the description of X(q) contains a special treatment of the case q.size() == 0. This is undesirable for four reasons:

  1. It replicates the functionality provided by X().
  2. It leads to the possibility of a collision in the state provided by some other X(q) with q.size() > 0.
  3. It is inconsistent with the description of X(q) in paragraphs 26.4.3.1 [rand.eng.lcong] p5, 26.4.3.2 [rand.eng.mers] p8, and 26.4.3.3 [rand.eng.sub] p10, where there is no special treatment of q.size() == 0.
  4. The proposed replacement for seed_seq::randomize given above allows for the case q.size() == 0.

Proposed resolution:

I recommend removing the special-case treatment of q.size() == 0. Here is the replacement line for table 104 of section 26.4.1.3 [rand.req.eng]:

X(q): Creates an engine u with an initial state which depends on a sequence produced by one call to q.randomize. Complexity: O(max(q.size(), size of state))

679. resize parameter by value

Section: 23.2 [sequences] Status: New Submitter: Howard Hinnant Date: 2007-06-11

View all issues with New status.

Discussion:

The C++98 standard specifies exactly one container member function that passes its parameter (of type T) by value instead of by const reference:

void resize(size_type sz, T c = T());

This fact has been discussed and debated repeatedly over the years, the first time being even before C++98 was ratified. The rationale for passing this parameter by value has been:

So that self-referencing statements are guaranteed to work, for example:

v.resize(v.size() + 1, v[0]);

However this rationale is not convincing as the signature for push_back is:

void push_back(const T& x);

push_back has semantics similar to resize (it appends), and it must also work in the self-referencing case:

v.push_back(v[0]);  // must work

The problem with passing T by value is that it can be significantly more expensive than passing by reference. The converse can also be true, but where it is, the difference is usually far less dramatic (e.g. for scalar types).

Even with move semantics available, passing this parameter by value can be expensive. Consider for example vector<vector<int>>:

std::vector<int> x(1000);
std::vector<std::vector<int>> v;
...
v.resize(v.size()+1, x);

In the pass-by-value case, x is copied once to the parameter of resize. Then, internally, since the code cannot know at compile time by how much resize is growing the vector, x is usually copied (not moved) a second time from resize's parameter into its proper place within the vector.

With pass-by-const-reference, the x in the above example need be copied only once. In this case, x has an expensive copy constructor, so every copy that can be avoided represents a significant saving.

If we can be efficient for push_back, we should be efficient for resize as well. A resize taking a reference parameter has been coded and shipped in the CodeWarrior library with no reports of problems of which I am aware.
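
A small instrumented type makes the extra copy visible. This is only an illustrative sketch: the exact counts depend on the library's resize signature and implementation (with the by-value signature, resize typically performs one more copy than push_back).

#include <cstddef>
#include <iostream>
#include <vector>

struct Counted {
    static std::size_t copies;
    Counted() {}
    Counted(const Counted&) { ++copies; }
    Counted& operator=(const Counted&) { return *this; }
};
std::size_t Counted::copies = 0;

int main()
{
    std::vector<Counted> v(1);
    v.reserve(16);                        // keep reallocation copies out of the counts

    Counted::copies = 0;
    v.push_back(v[0]);                    // passed by const reference: one copy
    std::cout << "push_back copies: " << Counted::copies << '\n';

    Counted::copies = 0;
    v.resize(v.size() + 1, v[0]);         // a by-value parameter can add an extra copy
    std::cout << "resize copies:    " << Counted::copies << '\n';
}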

Proposed resolution:

Change 23.2.2 [deque], p2:

class deque {
   ...
   void resize(size_type sz, const T& c);

Change 23.2.2.2 [deque.capacity], p3:

void resize(size_type sz, const T& c);

Change 23.2.3 [list], p2:

class list {
   ...
   void resize(size_type sz, const T& c);

Change 23.2.3.2 [list.capacity], p3:

void resize(size_type sz, const T& c);

Change 23.2.5 [vector], p2:

class vector {
   ...
   void resize(size_type sz, const T& c);

Change 23.2.5.2 [vector.capacity], p11:

void resize(size_type sz, const T& c);

680. move_iterator operator-> return

Section: 24.4.3.1 [move.iterator] Status: Open Submitter: Howard Hinnant Date: 2007-06-11

View all issues with Open status.

Discussion:

move_iterator's operator-> return type, pointer, does not consistently match the type returned by the description in 24.4.3.3.5 [move.iter.op.ref].

template <class Iterator>
class move_iterator {
public:
    ...
    typedef typename iterator_traits<Iterator>::pointer pointer;
    ...
    pointer operator->() const {return current;}
    ...
private: 
    Iterator current; // exposition only
};

There are two possible fixes.

  1. pointer operator->() const {return &*current;}
  2. typedef Iterator pointer;

The first solution is the one chosen by reverse_iterator. A potential disadvantage is that it may not work well with iterators that return a proxy on dereference when that proxy has an overloaded operator&(). Proxy references often need to overload operator&() to return a proxy pointer, and that proxy pointer may or may not be the same type as the iterator's pointer type.

By simply returning the Iterator and taking advantage of the fact that the language forwards calls to operator-> automatically until it finds a non-class type, the second solution avoids the issue of an overloaded operator&() entirely.
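
The drill-down behaviour described above can be demonstrated with a minimal wrapper; move_iter_sketch is an illustrative toy, not the proposed specification of move_iterator.

#include <iterator>
#include <string>
#include <utility>
#include <vector>

template <class Iterator>
class move_iter_sketch {
public:
    typedef Iterator pointer;   // solution 2: pointer is simply the wrapped iterator

    explicit move_iter_sketch(Iterator i) : current(i) {}

    typename std::iterator_traits<Iterator>::value_type&& operator*() const
    {
        return std::move(*current);
    }

    // The language keeps applying operator-> to the returned object until a
    // non-class type (a real pointer) is reached, so no operator&() on a
    // proxy reference is ever involved.
    pointer operator->() const { return current; }

private:
    Iterator current;
};

int main()
{
    std::vector<std::string> v(1, "hello");
    move_iter_sketch<std::vector<std::string>::iterator> it(v.begin());
    return it->size() == 5 ? 0 : 1;   // drills down to std::string's members
}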

Proposed resolution:

Change the synopsis in 24.4.3.1 [move.iterator]:

typedef typename iterator_traits<Iterator>::pointer pointer;

681. Operator functions impossible to compare are defined in [re.submatch.op]

Section: 28.9.2 [re.submatch.op] Status: New Submitter: Nozomu Katoo Date: 2007-05-27

View all issues with New status.

Discussion:

In 28.9.2 [re.submatch.op] of N2284, the operator functions numbered 31-42 are specified in terms of comparisons that do not appear to be defined. E.g.:

template <class BiIter>
    bool operator==(typename iterator_traits<BiIter>::value_type const& lhs,
                    const sub_match<BiIter>& rhs);

-31- Returns: lhs == rhs.str().

When char* is used as BiIter, iterator_traits<BiIter>::value_type is char, so that lhs == rhs.str() ends up comparing a char value with an object of type std::basic_string<char>. However, the behaviour of a comparison between these two types is not defined in 21.3.8 [string.nonmembers] of N2284. The same applies when wchar_t* is used as BiIter.
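
A minimal illustration of the missing comparison (a hypothetical snippet, with the offending line left commented out):

#include <string>

int main()
{
    char lhs = 'a';
    std::string rhs("a");
    // A literal reading of "Returns: lhs == rhs.str()" needs an operator==
    // taking (char, const std::basic_string<char>&), which 21.3.8
    // [string.nonmembers] does not declare:
    // bool b = (lhs == rhs);   // ill-formed
    (void)lhs;
    (void)rhs;
    return 0;
}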

Proposed resolution:


682. basic_regex ctor takes InputIterator or ForwardIterator?

Section: 28.8.2 [re.regex.construct] Status: New Submitter: Eric Niebler Date: 2007-06-03

View all issues with New status.

Discussion:

Looking at N2284, 28.8 [re.regex], p3 basic_regex class template synopsis shows this constructor:

template <class InputIterator>
     basic_regex(InputIterator first, InputIterator last, 
                 flag_type f = regex_constants::ECMAScript);

In 28.8.2 [re.regex.construct], p15, the constructor appears with this signature:

template <class ForwardIterator>
     basic_regex(ForwardIterator first, ForwardIterator last, 
                 flag_type f = regex_constants::ECMAScript);

ForwardIterator is probably correct, so the synopsis is wrong.

[ John adds: ]

I think either could be implemented, although an input iterator would probably require that an internal copy of the string be made.

I have no strong feelings either way, although I think my original intent was InputIterator.

Proposed resolution:


684. Unclear which members of match_results should be used in comparison

Section: 28.10 [re.results] Status: New Submitter: Nozomu Katoo Date: 2007-05-27

View other active issues in [re.results].

View all other issues in [re.results].

View all issues with New status.

Discussion:

In 28.4 [re.syn] of N2284, two template functions are declared here:

// 28.10, class template match_results: 
  <snip>
// match_results comparisons 
  template <class BidirectionalIterator, class Allocator> 
    bool operator== (const match_results<BidirectionalIterator, Allocator>& m1, 
                     const match_results<BidirectionalIterator, Allocator>& m2); 
  template <class BidirectionalIterator, class Allocator> 
    bool operator!= (const match_results<BidirectionalIterator, Allocator>& m1, 
                     const match_results<BidirectionalIterator, Allocator>& m2); 

// 28.10.6, match_results swap:

But the details of these two bool operator functions (i.e., which members of match_results should be used in comparison) are not described in any following sections.

[ John adds: ]

That looks like a bug: operator== should return true only if the two objects refer to the same match, i.e., if one object was constructed as a copy of the other.

Proposed resolution:


685. reverse_iterator/move_iterator difference has invalid signatures

Section: 24.4.1.3.19 [reverse.iter.opdiff], 24.4.3.3.14 [move.iter.nonmember] Status: New Submitter: Bo Persson Date: 2007-06-10

View all issues with New status.

Discussion:

In C++03 the difference between two reverse_iterators

ri1 - ri2

can be computed only if both iterators have the same base iterator type. The result type is the difference_type of that base iterator.

In the current draft, the operator is defined as 24.4.1.3.19 [reverse.iter.opdiff]

template<class Iterator1, class Iterator2> 
typename reverse_iterator<Iterator>::difference_type 
   operator-(const reverse_iterator<Iterator1>& x, 
                    const reverse_iterator<Iterator2>& y);

The return type is the same as the C++03 one, based on the no longer present Iterator template parameter.

Besides being slightly invalid, should this operator work only when Iterator1 and Iterator2 have the same difference_type? Or should the implementation choose one of them? Which one?

The same problem now also appears in operator-() for move_iterator 24.4.3.3.14 [move.iter.nonmember].
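
One possible direction for a fix is to let the return type be computed from the base iterators. The toy rev_sketch below illustrates a decltype-based signature; it is only a sketch of that direction (C++0x later took a similar approach), not the wording of a resolution.

#include <vector>

// Minimal reverse-iterator-like wrapper, just enough to show the signature.
template <class Iterator>
class rev_sketch {
public:
    explicit rev_sketch(Iterator i) : current(i) {}
    Iterator base() const { return current; }
private:
    Iterator current;
};

// The result type is whatever subtracting the base iterators yields, so no
// single "Iterator" template parameter has to be named in the return type.
template <class Iterator1, class Iterator2>
auto operator-(const rev_sketch<Iterator1>& x, const rev_sketch<Iterator2>& y)
    -> decltype(y.base() - x.base())
{
    return y.base() - x.base();
}

int main()
{
    std::vector<int> v(3);
    rev_sketch<std::vector<int>::iterator> a(v.end());
    rev_sketch<std::vector<int>::const_iterator> b(v.begin());
    return (b - a) == 3 ? 0 : 1;   // mixed iterator/const_iterator difference
}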

Proposed resolution:


686. Unique_ptr and shared_ptr fail to specify non-convertibility to int for unspecified-bool-type

Section: 20.6.5.2.4 [unique.ptr.single.observers], 20.6.6.2.5 [util.smartptr.shared.obs] Status: New Submitter: Beman Dawes Date: 2007-06-14

View all issues with New status.

Discussion:

The standard library uses the operator unspecified-bool-type() const idiom in five places. In three of those places (20.5.14.2.3 [func.wrap.func.cap], function capacity for example) the returned value is constrained to disallow unintended conversions to int. The standardese is

The return type shall not be convertible to int.

This constraint is omitted for unique_ptr and shared_ptr. It should be added for those.
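
For reference, the constraint can be satisfied with the usual pointer-to-member flavour of the idiom; handle_sketch below is an illustrative toy, not the specified interface of unique_ptr or shared_ptr.

class handle_sketch {
    void true_value() const {}                               // never called; only its type matters
    typedef void (handle_sketch::*bool_type)() const;
    void* p_;
public:
    explicit handle_sketch(void* p = 0) : p_(p) {}

    // Convertible to bool in boolean contexts, but a pointer to member is
    // not convertible to int, so accidental integer conversions are rejected.
    operator bool_type() const
    {
        return p_ ? &handle_sketch::true_value : 0;
    }
};

int main()
{
    handle_sketch h;
    if (!h) { /* ok: boolean test */ }
    // int i = h;       // ill-formed: the return type is not convertible to int
    // if (h == 1) { }  // ill-formed as well
    return 0;
}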

Proposed resolution:

To the Returns paragraph for operator unspecified-bool-type() const of 20.6.5.2.4 [unique.ptr.single.observers] paragraph 11 and 20.6.6.2.5 [util.smartptr.shared.obs] paragraph 16, add the sentence:

The return type shall not be convertible to int.


687. shared_ptr conversion constructor not constrained

Section: 20.6.6.2.1 [util.smartptr.shared.const], 20.6.6.3.1 [util.smartptr.weak.const] Status: New Submitter: Peter Dimov Date: 2007-05-10

View all issues with New status.

Discussion:

Since all conversions from shared_ptr<T> to shared_ptr<U> have the same rank regardless of the relationship between T and U, reasonable user code that works with raw pointers fails with shared_ptr:

void f( shared_ptr<void> );
void f( shared_ptr<int> );

int main()
{
  f( shared_ptr<double>() ); // ambiguous
}

Now that we officially have enable_if, we can constrain the constructor and the corresponding assignment operator to only participate in the overload resolution when the pointer types are compatible.
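
The kind of constraint being proposed can be sketched with enable_if on a toy pointer class; ptr_sketch and its members are hypothetical and are not shared_ptr's specified interface.

#include <type_traits>

template <class T>
class ptr_sketch {
public:
    ptr_sketch() : p_(0) {}

    // Participates in overload resolution only when Y* converts to T*.
    template <class Y,
              class = typename std::enable_if<
                  std::is_convertible<Y*, T*>::value>::type>
    ptr_sketch(const ptr_sketch<Y>& other) : p_(other.get()) {}

    T* get() const { return p_; }

private:
    T* p_;
};

void f(ptr_sketch<void>) {}
void f(ptr_sketch<int>)  {}

int main()
{
    f(ptr_sketch<double>());   // ok: only the ptr_sketch<void> overload is viable
    return 0;
}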

Proposed resolution:

In 20.6.6.2.1 [util.smartptr.shared.const], change:

-14- The second constructor shall not participate in the overload resolution unless Y* is convertible to T*.

In 20.6.6.3.1 [util.smartptr.weak.const], change:

weak_ptr(weak_ptr const& r);
template<class Y> weak_ptr(weak_ptr<Y> const& r);
template<class Y> weak_ptr(shared_ptr<Y> const& r);

-4- The second and third constructors shall not participate in the overload resolution unless Y* is convertible to T*.


688. reference_wrapper, cref unsafe, allow binding to rvalues

Section: 20.5.5.1 [refwrap.const] Status: New Submitter: Peter Dimov Date: 2007-05-10

View other active issues in [refwrap.const].

View all other issues in [refwrap.const].

View all issues with New status.

Discussion:

A reference_wrapper can be constructed from an rvalue, either by using the constructor, or via cref (and ref in some corner cases). This leads to a dangling reference being stored into the reference_wrapper object. Now that we have a mechanism to detect an rvalue, we can fix them to disallow this source of undefined behavior.

Also please see the thread starting at c++std-lib-17398 for some good discussion on this subject.
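
The effect aimed at below can be sketched with a deleted rvalue overload on a toy helper; cref_sketch is illustrative, and the actual proposed wording instead declares the offending signatures without defining them.

#include <functional>
#include <string>

template <class T>
std::reference_wrapper<const T> cref_sketch(const T& t)
{
    return std::cref(t);                    // lvalues bind here
}

template <class T>
void cref_sketch(const T&&) = delete;       // rvalues are rejected at compile time

int main()
{
    std::string s = "ok";
    auto r = cref_sketch(s);                    // fine: binds to an lvalue
    // auto bad = cref_sketch(std::string());   // ill-formed: would dangle
    return r.get().size() == 2 ? 0 : 1;
}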

Proposed resolution:

In 20.5.5 [refwrap], add:

private:
  explicit reference_wrapper(T&&);

In 20.5.5.1 [refwrap.const], add:

explicit reference_wrapper(T&&);

-?- Not defined, so as to disallow creating a reference_wrapper from an rvalue.

In the synopsis of <functional> (20.5.5 [refwrap]), change the declarations of ref and cref to:

template<class T> reference_wrapper<T> ref(T&&);
template<class T> reference_wrapper<const T> cref(const T&&);

In 20.5.5.5 [refwrap.helpers], change:

template<class T> reference_wrapper<T> ref(T&& t);

-1- Requires: t shall be an lvalue.

and change:

template<class T> reference_wrapper<const T> cref(const T&& t);

-6- Requires: t shall be an lvalue.

[ N2292 addresses the first part of the resolution but not the second. ]


689. reference_wrapper constructor overly constrained

Section: 20.5.5.1 [refwrap.const] Status: New Submitter: Peter Dimov Date: 2007-05-10

View other active issues in [refwrap.const].

View all other issues in [refwrap.const].

View all issues with New status.

Discussion:

The constructor of reference_wrapper is currently explicit. The primary motivation behind this is the safety problem with respect to rvalues, which is addressed by the proposed resolution of the previous issue. Therefore we should consider relaxing the requirements on the constructor since requests for the implicit conversion keep resurfacing.

Also please see the thread starting at c++std-lib-17398 for some good discussion on this subject.

Proposed resolution:

Remove the explicit from the constructor of reference_wrapper. If the proposed resolution of the previous issue is accepted, remove the explicit from the T&& constructor as well to keep them in sync.


691. const_local_iterator cbegin, cend missing from TR1

Section: TR1 6.3 [tr.hash] Status: New Submitter: Joaquín M López Muñoz Date: 2007-06-14

View all issues with New status.

Discussion:

The last version of TR1 does not include the following member functions for unordered containers:

const_local_iterator cbegin(size_type n) const;
const_local_iterator cend(size_type n) const;

which looks like an oversight to me. I've checked the TR1 issues lists and the latest working draft of the C++0x standard (N2284) and haven't found any mention of these member functions or of their absence.

Is this really an oversight, or am I missing something?

[ post-Toronto, Howard: I initially targeted this to C++0X, but now noting those members are in the WP, retargeting to TR1. ]

Proposed resolution:


692. get_money and put_money should be formatted I/O functions

Section: 27.6.4 [ext.manip] Status: New Submitter: Martin Sebor Date: 2007-06-22

View all other issues in [ext.manip].

View all issues with New status.

Discussion:

In a private email Bill Plauger notes:

I believe that the function that implements get_money [from N2072] should behave as a formatted input function, and the function that implements put_money should behave as a formatted output function. This has implications regarding the skipping of whitespace and the handling of errors, among other things.

The words don't say that right now and I'm far from convinced that such a change is editorial.

Martin's response:

I agree that the manipulators should handle exceptions the same way as formatted I/O functions do. The text in N2072 assumes so but the Returns clause explicitly omits exception handling for the sake of brevity. The spec should be clarified to that effect.

As for dealing with whitespace, I also agree it would make sense for the extractors and inserters involving the new manipulators to treat it the same way as formatted I/O.
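
To make the intent concrete, here is a rough sketch of the extraction behind in >> get_money(mon, intl), written in the style of a formatted input function (sentry construction, a try block, and setstate). It is illustrative only, with simplified error and exception handling; get_money_sketch is not the proposed interface.

#include <iostream>
#include <iterator>
#include <locale>
#include <sstream>

template <class charT, class traits>
void get_money_sketch(std::basic_istream<charT, traits>& in,
                      long double& mon, bool intl = false)
{
    typedef std::istreambuf_iterator<charT, traits> iter_type;
    typename std::basic_istream<charT, traits>::sentry ok(in);   // may skip whitespace
    if (ok) {
        std::ios_base::iostate err = std::ios_base::goodbit;
        try {
            std::use_facet<std::money_get<charT, iter_type> >(in.getloc())
                .get(iter_type(in), iter_type(), intl, in, err, mon);
        } catch (...) {
            err |= std::ios_base::badbit;   // formatted input maps exceptions to badbit
        }
        if (err != std::ios_base::goodbit)
            in.setstate(err);
    }
}

int main()
{
    std::istringstream s("   1234");
    s.imbue(std::locale::classic());
    long double units = 0;
    get_money_sketch(s, units);
    std::cout << units << '\n';   // leading whitespace was skipped; prints 1234
}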

Proposed resolution:

Add a new paragraph immediately above p4 of 27.6.4 [ext.manip] with the following text:

Effects: The expression in >> get_money(mon, intl) described below behaves as a formatted input function (as described in 27.6.1.2.1 [istream.formatted.reqmts]).

Also change p4 of 27.6.4 [ext.manip] as follows:

Returns: An object s of unspecified type such that if in is an object of type basic_istream<charT, traits> then the expression in >> get_money(mon, intl) behaves as a formatted input function that calls f(in, mon, intl). The function f can be defined as...


693. std::bitset::all() missing

Section: 23.3.5 [template.bitset] Status: New Submitter: Martin Sebor Date: 2007-06-22

View other active issues in [template.bitset].

View all other issues in [template.bitset].

View all issues with New status.

Discussion:

The bitset class template provides the member function any() to determine whether an object of the type has any bits set, and the member function none() to determine whether all of an object's bits are clear. However, the template does not provide a corresponding function to discover whether a bitset object has all its bits set. While it is possible, even easy, to obtain this information by comparing the result of count() with the result of size() for equality (i.e., via b.count() == b.size()) the operation is less efficient than a member function designed specifically for that purpose could be. (count() must count all non-zero bits in a bitset a word at a time while all() could stop counting as soon as it encountered the first word with a zero bit).
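
The efficiency argument can be made concrete with a sketch over a hypothetical word array; all_sketch and its storage layout are illustrative, since a real implementation would use the bitset's own representation.

#include <climits>
#include <cstddef>

// Returns true only if all N bits are set; bails out at the first word that
// is not all ones instead of counting every set bit.
template <std::size_t N>
bool all_sketch(const unsigned long* words)   // hypothetical word array, LSB first
{
    const std::size_t bits_per_word = CHAR_BIT * sizeof(unsigned long);
    const std::size_t full_words = N / bits_per_word;
    const std::size_t rest = N % bits_per_word;

    for (std::size_t i = 0; i < full_words; ++i)
        if (words[i] != ~0ul)
            return false;                     // early exit on the first zero bit

    if (rest != 0) {
        const unsigned long mask = (1ul << rest) - 1;   // low 'rest' bits
        if ((words[full_words] & mask) != mask)
            return false;
    }
    return true;
}

int main()
{
    unsigned long w[1] = { (1ul << 10) - 1 };  // 10 bits, all set
    return all_sketch<10>(w) ? 0 : 1;
}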

Proposed resolution:

Add a declaration of the new member function all() to the definition of the bitset template in 23.3.5 [template.bitset], p1, right above the declaration of any() as shown below:

bool operator!=(const bitset<N>& rhs) const;
bool test(size_t pos) const;
bool all() const;
bool any() const;
bool none() const;

Add a description of the new member function to the end of 23.3.5.2 [bitset.members] with the following text:

bool all() const;

Returns: count() == size().

In addition, change the description of any() and none() for consistency with all() as follows:

bool any() const;

Returns: count() != 0.

bool none() const;

Returns: count() == 0.


694. std::bitset and long long

Section: 23.3.5 [template.bitset] Status: New Submitter: Martin Sebor Date: 2007-06-22

View other active issues in [template.bitset].

View all other issues in [template.bitset].

View all issues with New status.

Discussion:

Objects of the bitset class template specializations can be constructed from and explicitly converted to values of the widest C++ integer type, unsigned long. With the introduction of long long into the language the template should be enhanced to make it possible to interoperate with values of this type as well, or perhaps uintmax_t. See c++std-lib-18274 for a brief discussion in support of this change.

Proposed resolution:

For simplicity, instead of adding overloads for unsigned long long and dealing with possible ambiguities in the spec, replace the bitset ctor that takes an unsigned long argument with one taking unsigned long long in the definition of the template as shown below. (The standard permits implementations to add overloads on other integer types or employ template tricks to achieve the same effect provided they don't cause ambiguities or changes in behavior.)

// [bitset.cons] constructors:
bitset();
bitset(unsigned long long val);
template<class charT, class traits, class Allocator>
explicit bitset(
                const basic_string<charT,traits,Allocator>& str,
                typename basic_string<charT,traits,Allocator>::size_type pos = 0,
                typename basic_string<charT,traits,Allocator>::size_type n =
                    basic_string<charT,traits,Allocator>::npos);

Make a corresponding change in 23.3.5.1 [bitset.cons], p2:

bitset(unsigned long long val);

Effects: Constructs an object of class bitset<N>, initializing the first M bit positions to the corresponding bit values in val. M is the smaller of N and the number of bits in the value representation (section [basic.types]) of unsigned long long. If M < N is true, the remaining bit positions are initialized to zero.

Additionally, introduce a new member function to_ullong() to make it possible to convert bitset to values of the new type. Add the following declaration to the definition of the template, immediately after the declaration of to_ulong() in 23.3.5 [template.bitset], p1, as shown below:

// element access:
bool operator[](size_t pos) const; // for b[i];
reference operator[](size_t pos); // for b[i];
unsigned long to_ulong() const;
unsigned long long to_ullong() const;
template <class charT, class traits, class Allocator>
basic_string<charT, traits, Allocator> to_string() const;

And add a description of the new member function to 23.3.5.2 [bitset.members], below the description of the existing to_ulong() (if possible), with the following text:

unsigned long long to_ullong() const;

Throws: overflow_error if the integral value x corresponding to the bits in *this cannot be represented as type unsigned long long.
Returns: x.
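
The intended semantics can be sketched as a free function over std::bitset; to_ullong_sketch is illustrative only and is not the proposed member function.

#include <bitset>
#include <climits>
#include <stdexcept>

template <std::size_t N>
unsigned long long to_ullong_sketch(const std::bitset<N>& b)
{
    const std::size_t width = CHAR_BIT * sizeof(unsigned long long);
    unsigned long long result = 0;
    for (std::size_t i = 0; i < N; ++i) {
        if (b[i]) {
            if (i >= width)   // a set bit that cannot be represented
                throw std::overflow_error("bitset does not fit in unsigned long long");
            result |= 1ull << i;
        }
    }
    return result;
}

int main()
{
    std::bitset<100> b;
    b[0] = true;
    b[63] = true;
    return to_ullong_sketch(b) == ((1ull << 63) | 1ull) ? 0 : 1;
}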

695. ctype<char>::classic_table() not accessible

Section: 22.2.1.3 [facet.ctype.special] Status: New Submitter: Martin Sebor Date: 2007-06-22

View all issues with New status.

Discussion:

The ctype<char>::classic_table() static member function returns a pointer to an array of const ctype_base::mask objects (enums) that contains ctype<char>::table_size elements. The table describes the properties of the character set in the "C" locale (i.e., whether a character at an index given by its value is alpha, digit, punct, etc.), and is typically used to initialize the ctype<char> facet in the classic "C" locale (the protected ctype<char> member function table() then returns the same value as classic_table()).

However, while ctype<char>::table_size (the size of the table) is a public static const member of the ctype<char> specialization, the classic_table() static member function is protected. That makes getting at the classic data less than convenient (i.e., one has to create a whole derived class just to get at the masks array). It makes little sense to expose the size of the table in the public interface while making the table itself protected, especially when the table is a constant object.

The same argument can be made for the non-static protected member function table().
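
The derived-class workaround mentioned above looks roughly like this; classic_table_access is an illustrative name.

#include <locale>

// Deriving from ctype<char> solely to re-export the protected static member.
struct classic_table_access : std::ctype<char> {
    static const mask* get() { return classic_table(); }
};

int main()
{
    const std::ctype_base::mask* tbl = classic_table_access::get();
    // tbl points to an array of ctype<char>::table_size mask values describing
    // the character classification of the "C" locale.
    return tbl != 0 ? 0 : 1;
}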

Proposed resolution:

Make the ctype<char>::classic_table() and ctype<char>::table() member functions public by moving their declarations into the public section of the definition of specialization in 22.2.1.3 [facet.ctype.special] as shown below:

  static locale::id id;
  static const size_t table_size = IMPLEMENTATION_DEFINED;
  const mask* table() const throw();
  static const mask* classic_table() throw();
protected:
  ~ctype(); // virtual
  virtual char do_toupper(char c) const;

696. istream::operator>>(int&) broken

Section: 27.6.1.2.2 [istream.formatted.arithmetic] Status: New Submitter: Martin Sebor Date: 2007-06-23

View other active issues in [istream.formatted.arithmetic].

View all other issues in [istream.formatted.arithmetic].

View all issues with New status.

Discussion:

From message c++std-lib-17897:

The code shown in 27.6.1.2.2 [istream.formatted.arithmetic] as the "as if" implementation of the two arithmetic extractors that don't have a corresponding num_get interface (i.e., the short and int overloads) is subtly buggy in how it deals with EOF, overflow, and other similar conditions (in addition to containing a few typos).

One problem is that if num_get::get() reaches EOF after reading an otherwise valid value that exceeds the limits of the narrower type (but not LONG_MIN or LONG_MAX), it will set err to eofbit. Because of the if condition testing for (err == 0), the extractor won't set failbit (and will presumably return a bogus value to the caller).

Another problem with the code is that it never actually sets the argument to the extracted value. That can't happen after the call to setstate(), since the function may throw, so we need to show when and how it's done (we can't just punt and say "it happens afterwards"). However, it turns out that showing how it's done isn't quite so easy, since the argument is normally left unchanged by the facet on error, except when the error is due to a misplaced thousands separator, which causes failbit to be set but doesn't prevent the facet from storing the value.
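
For illustration, here is one way an extractor for short could be written so that range errors set failbit without discarding the facet's eofbit, and so that a value is stored before setstate() has a chance to throw. This is a sketch under simplified assumptions, not the working paper's "as if" code nor a proposed resolution.

#include <istream>
#include <iterator>
#include <limits>
#include <locale>
#include <sstream>

std::istream& extract_short_sketch(std::istream& in, short& val)
{
    std::istream::sentry ok(in);
    if (ok) {
        std::ios_base::iostate err = std::ios_base::goodbit;
        long lval = 0;
        std::use_facet<std::num_get<char> >(in.getloc())
            .get(std::istreambuf_iterator<char>(in),
                 std::istreambuf_iterator<char>(), in, err, lval);
        if (lval < std::numeric_limits<short>::min()) {
            err |= std::ios_base::failbit;               // keep eofbit, add failbit
            val = std::numeric_limits<short>::min();
        } else if (lval > std::numeric_limits<short>::max()) {
            err |= std::ios_base::failbit;
            val = std::numeric_limits<short>::max();
        } else {
            val = static_cast<short>(lval);
        }
        if (err != std::ios_base::goodbit)
            in.setstate(err);                            // may throw; val already stored
    }
    return in;
}

int main()
{
    std::istringstream s("100000");   // exceeds SHRT_MAX where short is 16 bits
    short v = 0;
    extract_short_sketch(s, v);
    return s.fail() ? 0 : 1;
}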

Proposed resolution:


697. New <system_error> header leads to name clashes

Section: 19.4 [syserr] Status: New Submitter: Daniel Krügler Date: 2007-06-24

View all issues with New status.

Discussion:

The most recent state of N2241 as well as the current draft N2284 (section 19.4 [syserr], p.2) proposes a new enumeration type posix_errno directly in namespace std. One of the enumerators has the name invalid_argument, or fully qualified: std::invalid_argument. This name clashes with the exception type std::invalid_argument, see 19.1 [std.exceptions], p.3. The clash makes e.g. the following snippet invalid:

#include <system_error>
#include <stdexcept>

void foo() { throw std::invalid_argument("Don't call us - we call you!"); }  // invalid: the enumerator std::invalid_argument hides the exception class

I propose that this enumeration type (and probably the remaining parts of <system_error> as well) should be moved into an additional inner namespace, e.g. sys or system, to reduce foreseeable future clashes due to the great number of members that std::posix_errno already contains. (Btw.: why has the already proposed std::sys sub-namespace from N2066 been rejected?) A further clash candidate seems to be std::protocol_error (a reasonable name for an exception related to a std network library, I guess).

Another possible resolution would rely on the proposed strongly typed enums, as described in N2213. But maybe the forbidden implicit conversion to integral types would make these enumerators less attractive in this special case?

Proposed resolution:


698. Some system_error issues

Section: 19.4.3.1 [syserr.syserr.overview] Status: New Submitter: Daniel Krügler Date: 2007-06-24

View all issues with New status.

Discussion:

In 19.4.3.1 [syserr.syserr.overview] we have the class definition of std::system_error. Unlike all the other exception classes, which are constructible with a what_arg string (see 19.1 [std.exceptions], or ios_base::failure in 27.4.2.1.1 [ios::failure]), only overloads taking a const string& are provided. For consistency with the re-designed remaining exception classes this class should also provide constructors that accept a const char* what_arg string.

Please note that this proposed addition makes sense even considering the given implementation hint for what(), because what_arg is required to be set as the what_arg of the base class runtime_error, which now has an additional constructor overload accepting a const char*.

Proposed resolution:


699. N2111 changes min/max

Section: 26.4 [rand] Status: New Submitter: P.J. Plauger Date: 2007-07-01

View all other issues in [rand].

View all issues with New status.

Discussion:

N2111 changes min/max in several places in random from member functions to static data members. I believe this introduces a needless backward compatibility problem between C++0X and TR1. I'd like us to find new names for the static data members, or perhaps change min/max to constexprs in C++0X.

Proposed resolution:


700. N1856 defines struct identity

Section: 20.2.2 [forward] Status: New Submitter: P.J. Plauger Date: 2007-07-01

View all issues with New status.

Discussion:

N1856 defines struct identity in <utility>, which clashes with the traditional definition of struct identity in <functional> (not standard, but a common extension from the old STL). It would be nice if we could avoid this name clash for backward compatibility.

Proposed resolution:


701. assoc laguerre poly's

Section: TR1 5.2.1.1 [tr.num.sf.Lnm] Status: New Submitter: Christopher Crawford Date: 2007-06-30

View all issues with New status.

Discussion:

I see that the definition of the associated Laguerre polynomials in TR1 5.2.1.1 [tr.num.sf.Lnm] has been corrected since N1687. However, the draft standard only specifies ranks of integer value m, while the associated Laguerre polynomials are actually valid for real values of m > -1. In the case of non-integer values of m, the definition Ln^(m)(x) = (1/n!) e^x x^(-m) (d/dx)^n (e^(-x) x^(m+n)) must be used, which also holds for integer values of m. See Abramowitz & Stegun, 22.11.6 for the general case, and 22.5.16-17 for the integer case. In fact, fractional values are most commonly used in physics, for example m = +/- 1/2 to describe the harmonic oscillator in 1 dimension, and 1/2, 3/2, 5/2, ... in 3 dimensions.

If I am correct, the calculation of the more general case is no more difficult, and is in fact the function implemented in the GNU Scientific Library.  I would urge you to consider upgrading the standard, either adding extra functions for real m or switching the current ones to double.

Proposed resolution:


702. Restriction in associated Legendre functions

Section: TR1 5.2.1.2 [tr.num.sf.Plm] Status: New Submitter: Christopher Crawford Date: 2007-06-30

View all issues with New status.

Discussion:

One other small thing, in TR1 5.2.1.2 [tr.num.sf.Plm], the restriction should be |x| <= 1, not x >= 0.

Proposed resolution:


703. map::at() needs a complexity specification

Section: 23.3.1.2 [map.access] Status: New Submitter: Joe Gottman Date: 2007-07-03

View all other issues in [map.access].

View all issues with New status.

Discussion:

map::at() needs a complexity specification.

Proposed resolution:

Add the following to the specification of map::at(), 23.3.1.2 [map.access]:

Complexity: logarithmic.


704. MoveAssignable requirement for container value type overly strict

Section: 23.1 [container.requirements] Status: New Submitter: Howard Hinnant Date: 2007-05-20

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with New status.

Discussion:

The move-related changes inadvertently overwrote the intent of 276. Issue 276 removed the requirement of CopyAssignable from most of the member functions of node-based containers. But the move-related changes unnecessarily introduced the MoveAssignable requirement for those members which used to require CopyAssignable.

We also discussed (c++std-lib-18722) the possibility of dropping MoveAssignable from some of the sequence requirements. Additionally, the in-place construction work may further reduce requirements. For ease of reference, here are the minimum sequence requirements as I currently understand them. Items in the requirements tables in the working draft which do not appear below have been purposely omitted for brevity, as they do not have any requirements of this nature. Some items which do not have any requirements of this nature are included below just to confirm that they were not omitted by mistake.

Container Requirements
X u(a): value_type must be CopyConstructible
X u(rv): array requires value_type to be MoveConstructible
a = u: Sequences require value_type to be CopyConstructible and CopyAssignable. Associative containers require value_type to be CopyConstructible.
a = rv: array requires value_type to be MoveAssignable. Sequences with non-Swappable allocators require value_type to be MoveConstructible and MoveAssignable. Associative containers with non-Swappable allocators require value_type to be MoveConstructible.
swap(a,u): array requires value_type to be Swappable. Sequences with non-Swappable allocators require value_type to be Swappable, MoveConstructible and MoveAssignable. Associative containers with non-Swappable allocators require value_type to be MoveConstructible.

Sequence Requirements
X(n): value_type must be DefaultConstructible
X(n, t): value_type must be CopyConstructible
X(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the value_type must be MoveConstructible.
a.insert(p, t): The value_type must be CopyConstructible. The sequences vector and deque also require the value_type to be CopyAssignable.
a.insert(p, rv): The value_type must be MoveConstructible. The sequences vector and deque also require the value_type to be MoveAssignable.
a.insert(p, n, t): The value_type must be CopyConstructible. The sequences vector and deque also require the value_type to be CopyAssignable.
a.insert(p, i, j): If the iterators return an lvalue the value_type must be CopyConstructible. The sequences vector and deque also require the value_type to be CopyAssignable when the iterators return an lvalue. If the iterators return an rvalue the value_type must be MoveConstructible. The sequences vector and deque also require the value_type to be MoveAssignable when the iterators return an rvalue.
a.erase(p): The sequences vector and deque require the value_type to be MoveAssignable.
a.erase(q1, q2): The sequences vector and deque require the value_type to be MoveAssignable.
a.clear()
a.assign(i, j): If the iterators return an lvalue the value_type must be CopyConstructible and CopyAssignable. If the iterators return an rvalue the value_type must be MoveConstructible and MoveAssignable.
a.assign(n, t): The value_type must be CopyConstructible and CopyAssignable.
a.resize(n): The value_type must be DefaultConstructible. The sequences vector and deque also require the value_type to be MoveConstructible.
a.resize(n, t): The value_type must be CopyConstructible.

Optional Sequence Requirements
a.front()
a.back()
a.push_front(t): The value_type must be CopyConstructible.
a.push_front(rv): The value_type must be MoveConstructible.
a.push_back(t): The value_type must be CopyConstructible.
a.push_back(rv): The value_type must be MoveConstructible.
a.pop_front()
a.pop_back()
a[n]
a.at(n)

Associative Container Requirements
X(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the value_type must be MoveConstructible.
a_uniq.insert(t): The value_type must be CopyConstructible.
a_uniq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a_eq.insert(t): The value_type must be CopyConstructible.
a_eq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(p, t): The value_type must be CopyConstructible.
a.insert(p, rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the key_type and the mapped_type (if it exists) must be MoveConstructible.

Unordered Associative Container Requirements
X(i, j, n, hf, eq): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the value_type must be MoveConstructible.
a_uniq.insert(t): The value_type must be CopyConstructible.
a_uniq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a_eq.insert(t): The value_type must be CopyConstructible.
a_eq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(p, t): The value_type must be CopyConstructible.
a.insert(p, rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the key_type and the mapped_type (if it exists) must be MoveConstructible.

Miscellaneous Requirements
map[lvalue-key]: The key_type must be CopyConstructible. The mapped_type must be DefaultConstructible and MoveConstructible.
map[rvalue-key]: The key_type must be MoveConstructible. The mapped_type must be DefaultConstructible and MoveConstructible.

Proposed resolution:


705. type-trait decay incompletely specified

Section: 20.4.7 [meta.trans.other] Status: New Submitter: Thorsten Ottosen Date: 2007-07-08

View all issues with New status.

Discussion:

The current working draft has a type-trait decay in 20.4.7 [meta.trans.other].

Its use is to turn C++03 pass-by-value parameters into efficient C++0x pass-by-rvalue-reference parameters. However, the current definition introduces an incompatible change where the cv-qualification of the parameter type is retained. The deduced type should lose such cv-qualification, just as pass-by-value does.
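
The intended behaviour can be expressed as compile-time checks against the trait; the checks below assume decay strips top-level cv-qualification (the behaviour being requested here) and are illustrative, not proposed wording.

#include <type_traits>

// decay behaving as pass-by-value: top-level const and the reference are
// stripped, while array and function types still decay to pointers.
static_assert(std::is_same<std::decay<const int&>::type, int>::value,
              "const and reference dropped, as for a by-value parameter");
static_assert(std::is_same<std::decay<const char[6]>::type, const char*>::value,
              "array-to-pointer decay preserves the element type's const");

int main() { return 0; }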

Proposed resolution:

In 20.4.7 [meta.trans.other] change the last sentence:

Otherwise the member typedef type equals remove_cv<U>::type.

In 20.3.1.2 [tuple.creation]/1 change:

Let Ui be decay<Ti>::type for each Ti in Types. Then each Vi in VTypes is X& if Ui equals reference_wrapper<X>, otherwise Vi is Ui.


706. make_pair() should behave as make_tuple() wrt. reference_wrapper()

Section: 20.2.3 [pairs] Status: New Submitter: Thorsten Ottosen Date: 2007-07-08

View all other issues in [pairs].

View all issues with New status.

Discussion:

The current draft has make_pair() in 20.2.3 [pairs], p16 and make_tuple() in 20.3.1.2 [tuple.creation]. make_tuple() detects the presence of reference_wrapper<X> arguments and "unwraps" the reference in such cases. make_pair(), on the other hand, would create a reference_wrapper<X> member. I suggest that the two functions be made to behave similarly in this respect, to minimize confusion.
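
The requested behaviour can be demonstrated with a short example; it assumes a make_pair that unwraps reference_wrapper as suggested here (which is how make_tuple already behaves), and is illustrative rather than proposed wording.

#include <functional>
#include <type_traits>
#include <utility>

int main()
{
    int i = 42;
    auto p = std::make_pair(std::ref(i), 1.5);
    static_assert(std::is_same<decltype(p), std::pair<int&, double>>::value,
                  "reference_wrapper<int> unwraps to int&");
    p.first = 7;              // writes through to i
    return i == 7 ? 0 : 1;
}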

Proposed resolution:

In 20.2 [utility] change the synopsis for make_pair() to read

template <class T1, class T2>
  pair<V1, V2> make_pair(T1&&, T2&&);

In 20.2.3 [pairs]/16 change the declaration to match the above synopsis. Then change the 20.2.3 [pairs]/17 to:

Returns: pair<V1, V2>(forward<T1>(x), forward<T2>(y)) where V1 and V2 are determined as follows: Let Ui be decay<Ti>::type for each Ti. Then each Vi is X& if Ui equals reference_wrapper<X>, otherwise Vi is Ui.


707. null pointer constant for exception_ptr

Section: 18.7.1 [exception] Status: New Submitter: Jens Maurer Date: 2007-07-20

View all issues with New status.

Discussion:

From the Toronto Core wiki:

What do you mean by "null pointer constant"? How do you guarantee that exception_ptr() == 1 doesn't work? Do you even want to prevent that? What's the semantics? What about void *p = 0; exception_ptr() == p? Maybe disallow those in the interface, but how do you do that with portable C++? Could specify just "make it work".

Peter's response:

"Null pointer constant" is as defined in 4.10 [conv.ptr]. The intent is "just make it work"; it can be implemented as an assignment operator taking a unique pointer to member, as in the unspecified-bool-type idiom.
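
One comparison-based flavour of that technique can be sketched as follows: only a null pointer constant converts to a private pointer-to-member type, so comparisons against other integers or against a void* do not compile. exception_ptr_sketch and its members are illustrative, not a proposed interface.

class exception_ptr_sketch {
    // Only a null pointer constant converts to this pointer-to-member type.
    typedef void (exception_ptr_sketch::*safe_null)() const;
    void* p_;
public:
    exception_ptr_sketch() : p_(0) {}

    friend bool operator==(const exception_ptr_sketch& e, safe_null) { return e.p_ == 0; }
    friend bool operator==(safe_null, const exception_ptr_sketch& e) { return e.p_ == 0; }
};

int main()
{
    exception_ptr_sketch e;
    bool is_null = (e == 0);     // ok: 0 is a null pointer constant
    // bool b2 = (e == 1);       // ill-formed: 1 is not a null pointer constant
    // void* p = 0;
    // bool b3 = (e == p);       // ill-formed: void* does not convert to safe_null
    return is_null ? 0 : 1;
}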

Proposed resolution:


708. Locales need to be per thread and updated for POSIX changes

Section: 22 [localization] Status: New Submitter: Peter Dimov Date: 2007-07-28

View all other issues in [localization].

View all issues with New status.

Discussion:

The POSIX "Extended API Set Part 4,"

http://www.opengroup.org/sib/details.tpl?id=C065

introduces extensions to the C locale mechanism that allow multiple concurrent locales to be used in the same application by introducing a type locale_t that is very similar to std::locale, and a number of _l functions that make use of it.

The global locale (set by setlocale) is now specified to be per-process. If a thread does not call uselocale, the global locale is in effect for that thread. A thread can install a per-thread locale by using uselocale.

There is also a nice querylocale mechanism by which one can obtain the name (such as "de_DE") for a specific facet, even for combined locales, with no std::locale equivalent.

std::locale should be harmonized with the new POSIX locale_t mechanism and provide equivalents for uselocale and querylocale.
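
For reference, the POSIX side of this looks roughly as follows, assuming a POSIX system that provides the Extended API Set Part 4 functions (they are not part of ISO C++); the locale name "de_DE.UTF-8" is an example and may not be installed.

#include <cstdio>
#include <locale.h>   // POSIX newlocale/uselocale/freelocale and locale_t

int main()
{
    locale_t de = newlocale(LC_ALL_MASK, "de_DE.UTF-8", (locale_t)0);
    if (de == (locale_t)0)
        return 0;                        // requested locale not available here

    locale_t previous = uselocale(de);   // installs 'de' for this thread only
    std::printf("%.2f\n", 3.14);         // may print "3,14" under de_DE

    uselocale(previous);                 // restore the thread's previous locale
    freelocale(de);
    return 0;
}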

Proposed resolution: