C++

Topics related to C++:

Getting started with C++

Templates

The word template is a keyword with five different meanings in the C++ language, depending on the context.

  1. When followed by a list of template parameters enclosed in <>, it declares a template such as a class template, a function template, or a partial specialization of an existing template.

    template <class T>
    void increment(T& x) { ++x; }
    
  2. When followed by an empty <>, it declares an explicit (full) specialization.

    template <class T>
    void print(T x);
    
    template <> // <-- keyword used in this sense here
    void print(const char* s) {
        // output the content of the string
        printf("%s\n", s);
    }
    
  3. When followed by a declaration without <>, it forms an explicit instantiation declaration or definition.

    template <class T>
    std::set<T> make_singleton(T x) { return std::set<T>{x}; }
    
    template std::set<int> make_singleton(int x); // <-- keyword used in this sense here
    
  4. Within a template parameter list, it introduces a template template parameter.

    template <class T, template <class U> class Alloc>
    //                 ^^^^^^^^ keyword used in this sense here
    class List {
        struct Node {
            T value;
            Node* next;
        };
        Alloc<Node> allocator;
        Node* allocate_node() {
            return allocator.allocate(1); // allocate space for one Node
        }
        // ...
    };
    
  5. After the scope resolution operator :: and the class member access operators . and ->, it specifies that the following name is a template.

    struct Allocator {
        template <class T>
        T* allocate();
    };
    
    template <class T, class Alloc>
    class List {
        struct Node {
            T value;
            Node* next;
        };
        Alloc allocator;
        Node* allocate_node() {
            // return allocator.allocate<Node>();       // error: < and > are interpreted as
                                                        // comparison operators
            return allocator.template allocate<Node>(); // ok; allocate is a template
            //               ^^^^^^^^ keyword used in this sense here
        }
    };
    

Before C++11, a template could be declared with the export keyword, making it into an exported template. An exported template's definition does not need to be present in every translation unit in which the template is instantiated. For example, the following was supposed to work:

foo.h:

#ifndef FOO_H
#define FOO_H
export template <class T> T identity(T x);
#endif

foo.cpp:

#include "foo.h"
template <class T> T identity(T x) { return x; }

main.cpp:

#include "foo.h"
int main() {
    const int x = identity(42); // x is 42
}

Due to the difficulty of implementation, the export keyword was not supported by most major compilers. It was removed in C++11; now, it is illegal to use the export keyword at all. Instead, it is usually necessary to define templates in headers (in contrast to non-template functions, which are usually not defined in headers). See Why can templates only be implemented in the header file?

Metaprogramming

Metaprogramming (or more specifically, Template Metaprogramming) is the practice of using templates to create constants, functions, or data structures at compile-time. This allows computations to be performed once at compile time rather than at each run time.
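
As a minimal sketch of the idea (the Factorial name and the use of a class template here are illustrative, not taken from any particular library), a recursive class template can compute a constant entirely at compile time:

template <unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value; // recursive case
};

template <>
struct Factorial<0> {              // full specialization: base case stops the recursion
    static const unsigned value = 1;
};

static_assert(Factorial<5>::value == 120, "computed by the compiler, not at run time");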

Iterators

Returning several values from a function

std::string

Namespaces

The keyword namespace has three different meanings depending on context (all three are sketched in the example after this list):

  1. When followed by an optional name and a brace-enclosed sequence of declarations, it defines a new namespace or extends an existing namespace with those declarations. If the name is omitted, the namespace is an unnamed namespace.

  2. When followed by a name and an equal sign, it declares a namespace alias.

  3. When preceded by using and followed by a namespace name, it forms a using directive, which allows names in the given namespace to be found by unqualified name lookup (but does not redeclare those names in the current scope). A using-directive cannot occur at class scope.
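
A brief sketch of all three uses in one place (graphics, gfx, draw, and render are illustrative names):

namespace graphics {           // 1. defines (or extends) the namespace graphics
    void draw() {}
}

namespace gfx = graphics;      // 2. declares a namespace alias

void render() {
    using namespace graphics;  // 3. using directive
    draw();                    // graphics::draw is now found by unqualified lookup
}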

using namespace std; is discouraged. Why? Because namespace std is huge! This means that there is a high chance that names will collide:

//Really bad!
using namespace std;

//Calculates p^e and outputs it to std::cout
void pow(double p, double e) { /*...*/ }

//Calls pow
pow(5.0, 2.0); //Error! There is already a pow function in namespace std with the same signature,
               //so the call is ambiguous

File I/O

Classes/Structures

Note that the only difference between the struct and class keywords is that by default, the member variables, member functions, and base classes of a struct are public, while in a class they are private. C++ programmers tend to call it a class if it has constructors and destructors, and the ability to enforce its own invariants; or a struct if it's just a simple collection of values, but the C++ language itself makes no distinction.
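
A minimal sketch of the default-access difference (S and C are illustrative names):

struct S { int x; };   // members of a struct are public by default
class  C { int x; };   // members of a class are private by default

int main() {
    S s;
    s.x = 1;           // ok: S::x is public
    C c;
    // c.x = 1;        // error: C::x is private
}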

Smart Pointers

C++ is not a memory-managed language. Dynamically allocated memory (i.e. objects created with new) will be "leaked" if it is not explicitly deallocated (with delete). It is the programmer's responsibility to ensure that the dynamically allocated memory is freed before discarding the last pointer to that object.

Smart pointers can be used to automatically manage the scope of dynamically allocated memory (i.e. when the last pointer reference goes out of scope it is deleted).

Smart pointers are preferred over "raw" pointers in most cases. They make the ownership semantics of dynamically allocated memory explicit, by communicating in their names whether an object is intended to be shared or uniquely owned.

Use #include <memory> to be able to use smart pointers.
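
A short sketch of the two most common smart pointers (Widget is a hypothetical type):

#include <memory>

struct Widget { int value = 0; };

int main() {
    // unique ownership: the Widget is destroyed when p goes out of scope
    std::unique_ptr<Widget> p = std::make_unique<Widget>();  // make_unique is C++14

    // shared ownership: the Widget is destroyed when the last shared_ptr to it is destroyed
    std::shared_ptr<Widget> q = std::make_shared<Widget>();
    std::shared_ptr<Widget> r = q;  // q and r refer to the same Widget
}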

Function Overloading

Ambiguities can occur when one type can be implicitly converted into more than one type and there is no matching function for that specific type.

For example:

void foo(double, double);
void foo(long, long);

//Call foo with 2 ints
foo(1, 2); //Function call is ambiguous - int can be converted into a double/long at the same time 

std::vector

Use of an std::vector requires the inclusion of the <vector> header using #include <vector>.

Elements in a std::vector are stored contiguously on the free store. It should be noted that when vectors are nested as in std::vector<std::vector<int> >, the elements of each vector are contiguous, but each vector allocates its own underlying buffer on the free store.

Operator Overloading

Lambdas

C++17 (the current draft) introduces constexpr lambdas, basically lambdas that can be evaluated at compile time. A lambda is automatically constexpr if it satisfies constexpr requirements, but you can also specify it using the constexpr keyword:

//Explicitly define this lambda as constexpr
[]() constexpr {
    //Do stuff
}

Loops

Calls to the standard <algorithm> functions are generally preferable to hand-written loops.

If you want something an algorithm already does (or something very similar), the algorithm call is clearer, often more efficient and less error prone.

If you need a loop that does something fairly simple (but would require a confusing tangle of binders and adapters if you were using an algorithm), then just write the loop.
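
For example (a sketch; count_even is a hypothetical function), std::count_if states the intent directly where a hand-written loop would bury it in bookkeeping:

#include <algorithm>
#include <vector>

int count_even(const std::vector<int>& v) {
    // same result as a manual for-loop with a counter, but the intent is explicit
    return static_cast<int>(
        std::count_if(v.begin(), v.end(), [](int x) { return x % 2 == 0; }));
}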

std::map

  • To use any of std::map or std::multimap the header file <map> should be included.

  • std::map and std::multimap both keep their elements sorted in ascending order of keys. In the case of std::multimap, the values mapped to a single key are not kept in any sorted order.

  • The basic difference between std::map and std::multimap is that std::map does not allow duplicate keys, whereas std::multimap does (a short sketch follows this list).

  • Maps are typically implemented as self-balancing binary search trees, so find(), insert(), and erase() take Θ(log n) time. For average constant-time operations, use std::unordered_map.

  • size() and empty() have Θ(1) time complexity; the number of nodes is cached to avoid walking through the tree each time these functions are called.
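
A short sketch of the duplicate-key difference mentioned above:

#include <map>
#include <string>
#include <iostream>

int main() {
    std::map<std::string, int> m;
    m.insert({"a", 1});
    m.insert({"a", 2});              // ignored: key "a" already exists
    std::cout << m.size() << '\n';   // prints 1

    std::multimap<std::string, int> mm;
    mm.insert({"a", 1});
    mm.insert({"a", 2});             // kept: duplicate keys are allowed
    std::cout << mm.size() << '\n';  // prints 2
}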

Threading

Some notes:

  • Two std::thread objects can never represent the same thread.
  • A std::thread object can be in a state where it doesn't represent any thread (e.g. after a move, after calling join, etc.).

Value Categories

Preprocessor

Preprocessor statements are executed before your source files are handed to the compiler. They are capable of very low-level conditional logic. Since preprocessor constructs (e.g. object-like macros) aren't typed like normal functions (the preprocessing step happens before compilation), the compiler cannot enforce type checks on them; they should therefore be used carefully.

SFINAE (Substitution Failure Is Not An Error)

The Rule of Three, Five, And Zero

RAII: Resource Acquisition Is Initialization

RAII stands for Resource Acquisition Is Initialization. Also occasionally referred to as SBRM (Scope-Based Resource Management) or RRID (Resource Release Is Destruction), RAII is an idiom used to tie resources to object lifetime. In C++, the destructor for an object always runs when an object goes out of scope - we can take advantage of that to tie resource cleanup into object destruction.

Any time you need to acquire some resource (e.g. a lock, a file handle, an allocated buffer) that you will eventually need to release, you should consider using an object to handle that resource management for you. Stack unwinding will happen regardless of exception or early scope exit, so the resource handler object will clean up the resource for you without you having to carefully consider all possible current and future code paths.
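
A minimal sketch of a hand-written RAII wrapper for a FILE* (the File name is illustrative; std::lock_guard, std::unique_ptr, and std::fstream follow the same pattern):

#include <cstdio>

class File {
    std::FILE* handle;
public:
    explicit File(const char* name) : handle(std::fopen(name, "r")) {} // acquire in the constructor
    ~File() { if (handle) std::fclose(handle); }                       // release in the destructor
    File(const File&) = delete;                                        // exactly one owner
    File& operator=(const File&) = delete;
    std::FILE* get() const { return handle; }
};

void use() {
    File f("data.txt");
    // ... code that may return early or throw ...
}   // the file is closed here, no matter how use() exits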

It's worth noting that RAII doesn't completely free the developer from thinking about the lifetime of resources. One case is, obviously, a crash or exit() call, which will prevent destructors from being called. Since the OS will clean up process-local resources like memory after a process ends, this is not a problem in most cases. However, with system resources (e.g. named pipes, lock files, shared memory) you still need facilities to deal with the case where a process didn't clean up after itself, e.g. on startup, test whether the lock file is there; if it is, verify that the process with the recorded pid actually exists, then act accordingly.

Another situation is when a Unix process calls a function from the exec family, e.g. after a fork-exec to create a new process. Here, the child process will have a full copy of the parent's memory (including the RAII objects), but once exec is called, none of the destructors will be called in that process. On the other hand, if a process is forked and neither process calls exec, all resources are cleaned up in both processes. This is correct only for resources that were actually duplicated by the fork; with system resources, both processes will only have a reference to the resource (e.g. the path to a lock file) and will both try to release it individually, potentially causing the other process to fail.

Exceptions

Implementation-defined behavior

Special Member Functions

Random number generation

Random number generation in C++ is provided by the <random> header. This header defines random devices, pseudo-random generators and distributions.

Random devices return random numbers provided by the operating system. They should either be used for the initialization of pseudo-random generators or directly for cryptographic purposes.

Pseudo-random generators return integer pseudo-random numbers based on their initial seed. The pseudo-random number range typically spans all values of an unsigned type. All pseudo-random generators in the standard library will return the same numbers for the same initial seed for all platforms.

Distributions consume random numbers from pseudo-random generators or random devices and produce random numbers with necessary distribution. Distributions are not platform-independent and can produce different numbers for the same generators with the same initial seeds on different platforms.
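
A minimal sketch tying the three pieces together: a random device seeds a pseudo-random generator, and a distribution shapes its output:

#include <random>
#include <iostream>

int main() {
    std::random_device rd;                          // randomness provided by the operating system
    std::mt19937 gen(rd());                         // pseudo-random generator seeded once
    std::uniform_int_distribution<int> dist(1, 6);  // maps generator output to the range [1, 6]

    for (int i = 0; i < 3; ++i)
        std::cout << dist(gen) << '\n';
}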

References

Sorting

The std::sort function family is found in the algorithm library.

Regular expressions

Polymorphism

Perfect Forwarding

Perfect forwarding requires forwarding references in order to preserve the ref-qualifiers of the arguments. Such references appear only in a deduced context. That is:

template<class T>
void f(T&& x) // x is a forwarding reference, because T is deduced from a call to f()
{
    g(std::forward<T>(x)); // g() will receive an lvalue or an rvalue, depending on x
}

The following does not involve perfect forwarding, because T is not deduced from the constructor call:

template<class T>
struct a
{
    a(T&& x); // x is a rvalue reference, not a forwarding reference
};
C++17

C++17 will allow deduction of class template arguments. The constructor of "a" in the above example will then become a user of a forwarding reference:

a example1(1);
  // same as a<int> example1(1);

int x = 1;
a example2(x);
  // same as a<int&> example2(x);

Virtual Member Functions

Undefined Behavior

If a program contains undefined behavior, the C++ standard places no constraints on its behavior.

  • It may appear to work as the developer intended, but it may also crash or produce strange results.
  • The behavior may vary between runs of the same program.
  • Any part of the program may malfunction, including lines that come before the line that contains undefined behavior.
  • The implementation is not required to document the result of undefined behavior.

An implementation may document the result of an operation that produces undefined behavior according to the standard, but a program that depends on such documented behavior is not portable.

Why undefined behavior exists

Intuitively, undefined behavior is considered a bad thing as such errors can't be handled gracefully through, say, exception handlers.

But leaving some behavior undefined is actually an integral part of C++'s promise that "you don't pay for what you don't use". Undefined behavior allows a compiler to assume the developer knows what they're doing, and not to introduce code that checks for the mistakes highlighted in the above examples.

Finding and avoiding undefined behavior

Some tools can be used to discover undefined behavior during development:

  • Most compilers have warning flags to warn about some cases of undefined behavior at compile time.
  • Newer versions of gcc and clang include a so-called "Undefined Behavior Sanitizer" flag (-fsanitize=undefined) that will check for undefined behavior at runtime, at a performance cost.
  • lint-like tools may perform more thorough undefined behavior analysis.

Undefined, unspecified and implementation-defined behavior

From C++14 standard (ISO/IEC 14882:2014) section 1.9 (Program Execution):

  1. The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. [CUT]

  2. Certain aspects and operations of the abstract machine are described in this International Standard as implementation-defined (for example, sizeof(int)). These constitute the parameters of the abstract machine. Each implementation shall include documentation describing its characteristics and behavior in these respects. [CUT]

  3. Certain other aspects and operations of the abstract machine are described in this International Standard as unspecified (for example, evaluation of expressions in a new-initializer if the allocation function fails to allocate memory). Where possible, this International Standard defines a set of allowable behaviors. These define the nondeterministic aspects of the abstract machine. An instance of the abstract machine can thus have more than one possible execution for a given program and a given input.

  4. Certain other operations are described in this International Standard as undefined (for example, the effect of attempting to modify a const object). [ Note: this International Standard imposes no requirements on the behavior of programs that contain undefined behavior. —end note ]

Value and Reference Semantics

Overload resolution

Overload resolution happens in several different situations:

  • Calls to named overloaded functions. The candidates are all the functions found by name lookup.
  • Calls to a class object. The candidates are usually all the overloaded function call operators of the class.
  • Use of an operator. The candidates are the overloaded operator functions at namespace scope, the overloaded operator functions in the left class object (if any) and the built-in operators.
  • Overload resolution to find the correct conversion operator function or constructor to invoke for an initialization
    • For non-list direct initialization (Class c(value)), the candidates are constructors of Class.
    • For non-list copy initialization (Class c = value) and for finding the user defined conversion function to invoke in a user defined conversion sequence. The candidates are the constructors of Class and if the source is a class object, its conversion operator functions.
    • For initialization of a non-class from a class object (Nonclass c = classObject). The candidates are the conversion operator functions of the initializer object.
    • For initializing a reference with a class object (R &r = classObject), when the class has conversion operator functions that yield values that can be bound directly to r. The candidates are such conversion operator functions.
    • For list-initialization of a non-aggregate class object (Class c{1, 2, 3}), the candidates are the initializer list constructors for a first pass through overload resolution. If this doesn't find a viable candidate, a second pass through overload resolution is done, with the constructors of Class as candidates.

Move Semantics

Pointers to members

Pimpl Idiom

The pimpl idiom (pointer to implementation, sometimes referred to as opaque pointer or cheshire cat technique), reduces the compilation times of a class by moving all its private data members into a struct defined in the .cpp file.

The class owns a pointer to the implementation. This way, it can be forward declared, so that the header file does not need to #include classes that are used in private member variables.

When using the pimpl idiom, changing a private data member does not require recompiling classes that depend on it.
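
A minimal sketch of the idiom (Widget, Impl, and the file names are illustrative):

// widget.h -- no private members are visible, so clients rarely need recompiling
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();               // defined in the .cpp, where Impl is a complete type
    void draw();
private:
    struct Impl;             // forward declaration only
    std::unique_ptr<Impl> pimpl;
};

// widget.cpp -- the private data lives here; changing it does not touch widget.h
// #include "widget.h"
struct Widget::Impl {
    int cached_value = 0;
};

Widget::Widget() : pimpl(new Impl) {}
Widget::~Widget() = default;
void Widget::draw() { /* use pimpl->cached_value ... */ }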

std::function: To wrap any element that is callable

const keyword

A variable marked as const cannot1 be changed. Attempting to call any non-const operations on it will result in a compiler error.

1: Well, it can be changed through const_cast, but you should almost never use that

auto

The keyword auto is a typename that represents an automatically-deduced type.

It was already a reserved keyword in C++98, inherited from C. In old versions of C++, it could be used to explicitly state that a variable has automatic storage duration:

int main()
{
  auto int i = 5; // removing auto has no effect
}

That old meaning is now removed.

std::optional

Copy Elision

Bit Operators

Bit shift operations are not fully portable across all processor architectures, because different architectures can give the integer types different bit-widths. In other words, if you wrote

unsigned a = ~0u; // all bits set
unsigned b = a << 1;

the value of b could differ between platforms whose unsigned type has a different width, for example between an x86-based processor and a PIC-based processor.

Endian-ness does not need to be taken into account for the bit wise operations themselves, that is, the right shift (>>) will shift the bits towards the least significant bit and an XOR will perform an exclusive or on the bits. Endian-ness only needs to be taken into account with the data itself, that is, if endian-ness is a concern for your application, it's a concern regardless of bit wise operations.

Fold Expressions

Fold Expressions are supported for the following operators:

+  -  *  /  %  ^  &  |  <<  >>
+=  -=  *=  /=  %=  ^=  &=  |=  <<=  >>=  =
==  !=  <  >  <=  >=  &&  ||  ,  .*  ->*

When folding over an empty sequence, a fold expression is ill-formed, except for the following three operators:

Operator    Value when parameter pack is empty
&&          true
||          false
,           void()
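
A short sketch of a unary fold over operator + (sum is an illustrative name; C++17):

template <class... Ts>
constexpr auto sum(Ts... xs) {
    return (xs + ...);   // unary right fold over +
}

static_assert(sum(1, 2, 3) == 6, "evaluated at compile time");
// sum() with an empty pack would be ill-formed for +,
// whereas a fold over && with an empty pack yields true.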

Unions

Unions are very useful tools, but come with a few important caveats.

Unnamed types

mutable keyword

Bit fields

How expensive are the bitwise operations? Suppose a simple non-bit field structure:

struct foo {
    unsigned x;
    unsigned y;
};
static struct foo my_var;

In some later code:

my_var.y = 5;

If sizeof (unsigned) == 4, then x is stored at the start of the structure, and y is stored 4 bytes in. Assembly code generated may resemble:

loada register1,#myvar     ; get the address of the structure
storei register1[4],#0x05  ; put the value '5' at offset 4, e.g., set y=5

This is straightforward because x is not co-mingled with y. But imagine redefining the structure with bit fields:

struct foo {
    unsigned x : 4; /* Range 0-0x0f, or 0 through 15 */
    unsigned y : 4;
};

Both x and y will be allocated 4 bits and can share a single byte of storage. The structure is therefore much smaller: depending on the implementation it can take up as little as 1 byte (more commonly one unsigned storage unit of 4 bytes) instead of 8. Consider the assembly to set y now, assuming it ends up in the upper nibble:

loada  register1,#myvar        ; get the address of the structure
loadb  register2,register1[0]  ; get the byte from memory
andb   register2,#0x0f         ; zero out y in the byte, leaving x alone
orb    register2,#0x50         ; put the 5 into the 'y' portion of the byte
stb    register1[0],register2  ; put the modified byte back into memory

This may be a good trade-off if we have thousands or millions of these structures, and it helps keeps memory in cache or prevents swapping—or could bloat the executable to worsen these problems and slow processing. As with all things, use good judgement.

Device driver use: Avoid bit fields as a clever implementation strategy for device drivers. Bit field storage layouts are not necessarily consistent between compilers, making such implementations non-portable. The read-modify-write to set values may not do what devices expect, causing unexpected behaviors.

std::array

Use of a std::array requires the inclusion of the <array> header using #include <array>.

Singleton Design Pattern

A Singleton is designed to ensure a class only has one instance and provides a global point of access to it. If you only require one instance or a convenient global point of access, but not both, consider other options before turning to the singleton.

Global variables can make it harder to reason about code. For example, if one of the calling functions isn't happy with the data it's receiving from a Singleton, you now have to track down what is giving the singleton bad data in the first place.

Singletons also encourage coupling, a term used to describe two components of code that are joined together, which reduces each component's own measure of self-containment.

Singletons aren't concurrency friendly. When a class has a global point of access, every thread has the ability to access it which can lead to deadlocks and race conditions.

Lastly, lazy initialization can cause performance issues if it happens at the wrong time. Removing lazy initialization also removes some of the features that make Singletons interesting in the first place, such as polymorphism (see Subclasses).

Sources: Game Programming Patterns by Robert Nystrom

The ISO C++ Standard

When C++ is mentioned, often a reference is made to "the Standard". But what is exactly that standard?

C++ has a long history. It started as a small project by Bjarne Stroustrup within Bell Labs, and by the early 90's it had become quite popular. Multiple companies were creating C++ compilers so that users could compile C++ on a wide range of computers. But in order to facilitate this, all these competing compilers needed to share a single definition of the language.

At that point, the C language had successfully been standardized. This means that a formal description of the language was written. This was submitted to the American National Standards Institute (ANSI), which opened up the document for review and subsequently published it in 1989. One year later, the International Organization for Standardization (because it would have different acronyms in different languages, they chose one form, ISO, derived from the Greek isos, meaning equal) adopted the American standard as an International Standard.

For C++, it was clear from the beginning that there was an international interest. A workgroup within ISO was started (known as WG21, within SubCommittee 22). This workgroup drafted a first standard around 1995. But as we programmers know, there's nothing more dangerous to a planned delivery than last minute features, and that happened to C++ as well. In 1995, a cool new library named the STL surfaced, and the people working in WG21 decided to add a slimmed-down version to the C++ draft standard. Naturally, this caused the deadlines to be missed and only 3 years later did the document become final. ISO is a very formal organization, so the C++ Standard was christened with the not very marketable name of ISO/IEC 14882. As standards can be updated, this exact version became known as 14882:1998.

And indeed there was a demand to update the Standard. The Standard is a very thick document, which aims to exactly describe how C++ compilers should work. Even a slight ambiguity can be worth fixing, so by 2003 an update was released as 14882:2003. However, this did not add any feature to C++; the new features were scheduled for the second update.

Informally, this second update was known as C++0x, because it wasn't known whether this would take until 2008 or 2009. Well - that version also got a slight delay, which is why it became 14882:2011.

Luckily, WG21 decided not to let that happen again. C++11 was well-received and led to a renewed interest in C++. So, to keep the momentum, the third update went from planning to publication in 3 years, becoming 14882:2014.

The work didn't stop there, either. The C++17 standard has been proposed and the work for C++20 has been started.

User-Defined Literals

Enumeration

Type Erasure

Memory management

A leading :: forces the new or delete operator to be looked up in global scope, overriding any overloaded class-specific new or delete operators.

The optional arguments following the new keyword are usually used to call placement new, but can also be used to pass additional information to the allocator, such as a tag requesting that memory be allocated from a chosen pool.

The type allocated is usually explicitly specified, e.g., new Foo, but can also be written as auto (since C++11) or decltype(auto) (since C++14) to deduce it from the initializer.

Initialization of the object allocated occurs according to the same rules as initialization of local variables. In particular, the object will be default-initialized if the initializer is omitted, and when dynamically allocating a scalar type or an array of scalar type, there is no guarantee that the memory will be zeroed out.
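
For example, a sketch of the difference between omitting and providing an initializer:

int main() {
    int* a = new int;       // default-initialized: *a has an indeterminate value
    int* b = new int();     // value-initialized: *b is 0
    int* c = new int[4];    // elements are default-initialized (indeterminate)
    int* d = new int[4]();  // elements are value-initialized (all 0)

    delete a;
    delete b;
    delete[] c;
    delete[] d;
}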

An array object created by a new-expression must be destroyed using delete[], regardless of whether the new-expression was written with [] or not. For example:

using IA = int[4];
int* pIA = new IA;
delete[] pIA;  // right
// delete pIA;  // wrong

Bit Manipulation

In order to use std::bitset you will have to include the <bitset> header.

#include <bitset>

std::bitset overloads all of the operator functions to allow the same usage as the C-style handling of bitsets.
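
A short sketch:

#include <bitset>
#include <iostream>

int main() {
    std::bitset<8> b("10110010");      // construct from a string of bits
    b.set(0);                          // set the least significant bit -> 10110011
    b &= std::bitset<8>("11110000");   // familiar bitwise operators    -> 10110000
    std::cout << b << ' ' << b.count() << '\n';  // prints: 10110000 3
}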


References

Arrays

Pointers

Be aware of problems when declaring multiple pointers on the same line.

int* a, b, c; //Only a is a pointer, the others are regular ints.

int* a, *b, *c; //These are three pointers!

int *foo[2]; //Both *foo[0] and *foo[1] are pointers.

Explicit type conversions

All six cast notations have one thing in common; the value category of the result is determined by the type being cast to:

  • Casting to an lvalue reference type, as in dynamic_cast<Derived&>(base), yields an lvalue. Therefore, when you want to do something with the same object but treat it as a different type, you would cast to an lvalue reference type.
  • Casting to an rvalue reference type, as in static_cast<string&&>(s), yields an rvalue.
  • Casting to a non-reference type, as in (int)x, yields a prvalue, which may be thought of as a copy of the value being cast, but with a different type from the original.

The reinterpret_cast keyword is responsible for performing two different kinds of "unsafe" conversions: reinterpreting a pointer (or reference) as a pointer to an unrelated type, and converting between pointer and integral types.

The static_cast keyword can perform a variety of different conversions:

  • Base to derived conversions

  • Any conversion that can be done by a direct initialization, including both implicit conversions and conversions that call an explicit constructor or conversion function. See here and here for more details.

  • To void, which discards the value of the expression.

    // on some compilers, suppresses warning about x being unused
    static_cast<void>(x);
    
  • Between arithmetic and enumeration types, and between different enumeration types. See enum conversions

  • From pointer to member of derived class, to pointer to member of base class. The types pointed to must match. See derived to base conversion for pointers to members

  • void* to T*.

C++11

RTTI: Run-Time Type Information

Standard Library Algorithms

Friend keyword

Expression templates

Scopes

Atomic Types

std::atomic allows atomic access to a TriviallyCopyable type; whether this is done via lock-free atomic operations or by using locks is implementation-dependent. The only atomic type that is guaranteed to be lock-free is std::atomic_flag.

static_assert

Unlike runtime assertions, static assertions are checked at compile-time and are also enforced when compiling optimized builds.
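
For example (a sketch; average is a hypothetical function):

#include <type_traits>

template <class T>
T average(T a, T b) {
    static_assert(std::is_arithmetic<T>::value,
                  "average requires an arithmetic type");   // rejected at compile time otherwise
    return (a + b) / 2;
}

static_assert(sizeof(int) >= 4, "this code assumes int is at least 32 bits");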

operator precedence

Operators are listed top to bottom, in descending precedence. Operators with the same number have equal precedence and the same associativity.

  1. ::
  2. The postfix operators: [] () T(...) . -> ++ -- dynamic_cast static_cast reinterpret_cast const_cast typeid
  3. The unary prefix operators: ++ -- * & + - ! ~ sizeof new delete delete[]; the C-style cast notation, (T)...; (C++11 and above) sizeof... alignof noexcept
  4. .* and ->*
  5. *, /, and %, binary arithmetic operators
  6. + and -, binary arithmetic operators
  7. << and >>
  8. <, >, <=, >=
  9. == and !=
  10. &, the bitwise AND operator
  11. ^
  12. |
  13. &&
  14. ||
  15. ?: (ternary conditional operator)
  16. =, *=, /=, %=, +=, -=, >>=, <<=, &=, ^=, |=
  17. throw
  18. , (the comma operator)

The assignment, compound assignment, and ternary conditional operators are right-associative. All other binary operators are left-associative.

The rules for the ternary conditional operator are a bit more complicated than simple precedence rules can express.

  • An operand binds less tightly to a ? on its left or a : on its right than to any other operator. Effectively, the second operand of the conditional operator is parsed as though it is parenthesized. This allows an expression such as a ? b , c : d to be syntactically valid.
  • An operand binds more tightly to a ? on its right than to an assignment operator or throw on its left, so a = b ? c : d is equivalent to a = (b ? c : d) and throw a ? b : c is equivalent to throw (a ? b : c).
  • An operand binds more tightly to an assignment operator on its right than to : on its left, so a ? b : c = d is equivalent to a ? b : (c = d).

constexpr

The constexpr keyword was added in C++11, but for a few years after the C++11 standard was published not all major compilers supported it. As of the time of publication of C++14, all major compilers support constexpr.
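
A minimal sketch (square is a hypothetical function):

constexpr int square(int x) { return x * x; }

int arr[square(4)];   // ok: usable where a constant expression is required
static_assert(square(4) == 16, "evaluated at compile time");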

Date and time using <chrono> header

Trailing return type

The above syntax shows a full function declaration using a trailing return type, where square brackets indicate an optional part of the function declaration (like the argument list of a function that takes no arguments).

Additionally, the syntax of the trailing return type prohibits defining a class, union, or enum type inside a trailing return type (note that this is not allowed in a leading return type either). Other than that, types can be spelled the same way after the -> as they would be elsewhere.
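
For illustration (a sketch; add and multiply are hypothetical functions):

// the leading auto is a placeholder; the actual return type follows ->
auto add(int a, int b) -> int {
    return a + b;
}

// useful when the return type depends on the parameters (C++11)
template <class T, class U>
auto multiply(T t, U u) -> decltype(t * u) {
    return t * u;
}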

Function Template Overloading

  • A normal function is never related to a function template, despite having the same name and the same type.
  • A normal function call and a generated function template call are different even if they share the same name, the same return type, and the same argument list.

Common compile/linker errors (GCC)

Design pattern implementation in C++

A design pattern is a general reusable solution to a commonly occurring problem within a given context in software design.

Optimization in C++

Compiling and Building

Most operating systems ship without a compiler, and one has to be installed later. Some common compiler choices are:

Please consult the appropriate compiler manual, on how to compile a C++ program.

As an alternative to using a specific compiler with its own build system, it is possible to let a generic build system configure the project for a specific compiler or for the default installed one.

Type Traits

Type traits are templated constructs used to compare and test the properties of different types at compile time. They can be used to provide conditional logic at compile time that can limit or extend the functionality of your code in a specific manner. The type traits library was brought in with the C++11 standard and provides a number of different capabilities. It is also possible to create your own type trait comparison templates.
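
A short sketch using standard traits and a hand-written one (is_int is an illustrative name):

#include <type_traits>

// a minimal hand-written trait
template <class T> struct is_int      { static const bool value = false; };
template <>        struct is_int<int> { static const bool value = true;  };

static_assert(std::is_integral<long>::value, "long is an integral type");
static_assert(!std::is_floating_point<int>::value, "int is not a floating-point type");
static_assert(is_int<int>::value && !is_int<double>::value, "custom trait works per type");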

std::pair

Keywords

The full list of keywords is as follows:

The tokens final and override are not keywords. They may be used as identifiers and have special meaning only in certain contexts.

The tokens and, and_eq, bitand, bitor, compl, not, not_eq, or, or_eq, xor, and xor_eq are alternative spellings of &&, &=, &, |, ~, !, !=, ||, |=, ^, and ^=, respectively. The standard does not treat them as keywords, but they are keywords for all intents and purposes, since it is impossible to redefine them or use them to mean anything other than the operators they represent.

The following topics contain detailed explanations of many of the keywords in C++, which serve fundamental purposes such as naming basic types or controlling the flow of execution.

One Definition Rule (ODR)

Unspecified behavior

Floating Point Arithmetic

Argument Dependent Name Lookup

std::variant

Variant is a replacement for raw union use. It is type-safe and knows what type it is, and it carefully constructs and destroys the objects within it when it should.

It is almost never empty: only in corner cases where replacing its content throws and it cannot back out safely does it end up being in an empty state.

It behaves somewhat like a std::tuple, and somewhat like an std::optional.

Using std::get and std::get_if is usually a bad idea. The right answer is usually std::visit, which lets you deal with every possibility right there. if constexpr can be used within the visit if you need to branch your behavior, rather than doing a sequence of runtime checks that duplicate what visit will do more efficiently.
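
A short sketch of visiting a variant (C++17):

#include <variant>
#include <string>
#include <iostream>

int main() {
    std::variant<int, std::string> v = 42;

    // the visitor handles every alternative in one place
    std::visit([](const auto& value) { std::cout << value << '\n'; }, v);

    v = std::string("hello");
    std::visit([](const auto& value) { std::cout << value << '\n'; }, v);
}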

Attributes

Internationalization in C++

The C++ language does not dictate any character set; some compilers may support the use of UTF-8, or even UTF-16. However, there is no certainty that anything beyond simple ANSI/ASCII characters will be provided.

Thus all international language support is implementation defined, reliant on what platform, operating system, and compiler you may be using.

Several third-party libraries (such as International Components for Unicode (ICU)) can be used to extend the international support of the platform.

Profiling

Return Type Covariance

Non-Static Member Functions

A non-static member function is a class/struct/union member function, which is called on a particular instance, and operates on said instance. Unlike static member functions, it cannot be called without specifying an instance.

For information on classes, structures, and unions, please see the parent topic.

Recursion in C++

Callable Objects

std::iomanip

Constant class member functions

What does a const member function of a class really mean? The simple definition seems to be that a const member function cannot change the object. But what does 'cannot change' really mean here? It simply means that you cannot assign to the class's data members.

However, you can still do other, indirect operations, like inserting an entry into a map, as shown in the example. Allowing this might look like the const function is modifying the object (yes, it does in one sense), but it is allowed.

So, the real meaning is that a const member function cannot assign directly to the class's data members, but it can do other things, as explained in the example.

Side by Side Comparisons of classic C++ examples solved via C++ vs C++11 vs C++14 vs C++17

The This Pointer

Inline functions

Usually, if the code generated for a function is sufficiently small, then it's a good candidate to be inlined. Why so? If a function is large and is inlined in a loop, its code is duplicated for every call made, leading to bloat of the generated binary. But how small is sufficiently small?

While inline functions seem to be a great way to avoid function-calling overhead, it should be noted that not all functions that are marked inline are actually inlined. In other words, when you say inline, it is only a hint to the compiler, not an order: the compiler isn't obliged to inline the function and is free to ignore the hint, and most of them do. Modern compilers are so much better at making such optimisations that this keyword is now a vestige of the past, from when this suggestion of function inlining by the programmer was taken seriously by compilers. Even functions not marked inline are inlined by the compiler when it sees a benefit in doing so.

Inline as a linkage directive

The more practical use of inline in modern C++ comes from using it as a linkage directive. When a function is defined, not just declared, in a header which is going to be included in multiple sources, each translation unit will have its own copy of this function, which would normally be an ODR (One Definition Rule) violation; this rule roughly says that there can be only one definition of a function, variable, etc. To circumvent this violation, marking the function definition inline exempts it from the one-definition restriction: an inline function may be defined, identically, in every translation unit that uses it. (Note that inline does not give the function internal linkage; the linkage remains external.)

FAQs

When should I write the keyword 'inline' for a function/method in C++?

Only when you want the function to be defined in a header. More exactly, only when the function's definition can show up in multiple compilation units. It's a good idea to define small (as in one-liner) functions in the header file, as it gives the compiler more information to work with while optimizing your code. It can also increase compilation time, though.

When should I not write the keyword 'inline' for a function/method in C++?

Don't add inline when you think your code will run faster if the compiler inlines it.

When will the compiler not be able to inline a function/method?

Generally, the compiler will be able to do this better than you. However, the compiler doesn't have the option to inline code if it doesn't have the function definition. In maximally optimized code usually all private methods are inlined whether you ask for it or not.

See Also

Copying vs Assignment

Client server examples

Header Files

In C++, as in C, the C++ compiler and compilation process makes use of the C preprocessor. As specified by the GNU C Preprocessor manual, a header file is defined as the following:

A header file is a file containing C declarations and macro definitions (see Macros) to be shared between several source files. You request the use of a header file in your program by including it, with the C preprocessing directive ‘#include’.

Header files serve two purposes.

  • System header files declare the interfaces to parts of the operating system. You include them in your program to supply the definitions and declarations you need to invoke system calls and libraries.
  • Your own header files contain declarations for interfaces between the source files of your program. Each time you have a group of related declarations and macro definitions all or most of which are needed in several different source files, it is a good idea to create a header file for them.

However, to the C preprocessor itself, a header file is no different than a source file.

The header/source file organization scheme is simply a strongly-held and standard convention set by various software projects in order to provide separation between interface and implementation.

Although it is not formally enforced by the C++ Standard itself, following the header/source file convention is highly recommended, and, in practice, is already almost ubiquitous.

Note that header files may be replaced as a project file structure convention by the upcoming feature of modules, which is still to be considered for inclusion in a future C++ Standard as of the time of writing (e.g. C++20).

Const Correctness

std::atomics

Data Structures in C++

Refactoring Techniques

C++ Streams

Default constructor of std::istream_iterator constructs an iterator which represents the end of the stream. Thus, std::copy(std::istream_iterator<int>(ifs), std::istream_iterator<int>(), .... means to copy from the current position in ifs to the end.
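
For example (a sketch assuming a file numbers.txt containing whitespace-separated integers):

#include <fstream>
#include <iterator>
#include <vector>
#include <algorithm>

int main() {
    std::ifstream ifs("numbers.txt");
    std::vector<int> numbers;
    std::copy(std::istream_iterator<int>(ifs),  // current position in the stream
              std::istream_iterator<int>(),     // default-constructed: end of stream
              std::back_inserter(numbers));
}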

Parameter packs

Literals

Flow Control

Check out the loops topic for the different kind of loops.

Type Keywords

Basic Type Keywords

Variable Declaration Keywords

Iteration

type deduction

In November 2014, the C++ Standardization Committee adopted proposal N3922, which eliminates the special type deduction rule for auto and braced initializers using direct initialization syntax. This is not part of the C++ standard but has been implemented by some compilers.

std::any

The class std::any provides a type-safe container to which we can put single values of any type.

C++11 Memory Model

Different threads trying to access the same memory location participate in a data race if at least one of the operations is a modification (also known as store operation). These data races cause undefined behavior. To avoid them one needs to prevent these threads from concurrently executing such conflicting operations.

Synchronization primitives (mutex, critical section and the like) can guard such accesses. The Memory Model introduced in C++11 defines two new portable ways to synchronize access to memory in multi-threaded environment: atomic operations and fences.

Atomic Operations

It is now possible to read from and write to a given memory location by the use of atomic load and atomic store operations. For convenience these are wrapped in the std::atomic<T> template class. This class wraps a value of type T, but this time loads from and stores to the object are atomic.

The template cannot be instantiated for every type. Which types are available is implementation-specific, but this usually includes most (or all) available integral types as well as pointer types, so std::atomic<unsigned> and std::atomic<std::vector<foo> *> should be available, while std::atomic<std::pair<bool,char>> most probably won't be.

Atomic operations have the following properties:

  • All atomic operations can be performed concurrently from multiple threads without causing undefined behavior.
  • An atomic load will see either the initial value which the atomic object was constructed with, or the value written to it via some atomic store operation.
  • Atomic stores to the same atomic object are ordered the same in all threads. If a thread has already seen the value of some atomic store operation, subsequent atomic load operations will see either the same value or the value written by a later atomic store operation.
  • Atomic read-modify-write operations allow atomic load and atomic store to happen together, with no other atomic store in between. For example, one can atomically increment a counter from multiple threads, and no increment will be lost regardless of the contention between the threads.
  • Atomic operations receive an optional std::memory_order parameter which defines what additional properties the operation has regarding other memory locations.
    std::memory_order_relaxed: no additional restrictions
    std::memory_order_release / std::memory_order_acquire: if a load-acquire sees the value stored by a store-release, then stores sequenced before the store-release happen before loads sequenced after the load-acquire
    std::memory_order_consume: like memory_order_acquire, but only for dependent loads
    std::memory_order_acq_rel: combines load-acquire and store-release
    std::memory_order_seq_cst: sequential consistency

These memory order tags allow three different memory ordering disciplines: sequential consistency, relaxed, and release-acquire with its sibling release-consume.

Sequential Consistency

If no memory order is specified for an atomic operation, the order defaults to sequential consistency. This mode can also be explicitly selected by tagging the operation with std::memory_order_seq_cst.

With this order no memory operation can cross the atomic operation. All memory operations sequenced before the atomic operation happen before the atomic operation and the atomic operation happens before all memory operations that are sequenced after it. This mode is probably the easiest one to reason about but it also leads to the greatest penalty to performance. It also prevents all compiler optimizations that might otherwise try to reorder operations past the atomic operation.

Relaxed Ordering

The opposite of sequential consistency is the relaxed memory ordering. It is selected with the std::memory_order_relaxed tag. A relaxed atomic operation imposes no restrictions on other memory operations. The only effect that remains is that the operation itself is still atomic.

Release-Acquire Ordering

An atomic store operation can be tagged with std::memory_order_release and an atomic load operation can be tagged with std::memory_order_acquire. The first operation is called (atomic) store-release while the second is called (atomic) load-acquire.

When load-acquire sees the value written by a store-release the following happens: all store operations sequenced before the store-release become visible to (happen before) load operations that are sequenced after the load-acquire.

Atomic read-modify-write operations can also receive the cumulative tag std::memory_order_acq_rel. This makes the atomic load portion of the operation an atomic load-acquire while the atomic store portion becomes atomic store-release.

The compiler is not allowed to move store operations after an atomic store-release operation. It is also not allowed to move load operations before atomic load-acquire (or load-consume).

Also note that there is no atomic load-release or atomic store-acquire. Attempting to create such operations makes them relaxed operations.
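
A minimal sketch of a release/acquire handshake between two threads (payload and ready are illustrative names):

#include <atomic>
#include <thread>
#include <cassert>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                   // ordinary store
    ready.store(true, std::memory_order_release);   // store-release publishes it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire))  // load-acquire
        ;                                           // spin until the flag is seen
    assert(payload == 42);                          // guaranteed by the happens-before relation
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}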

Release-Consume Ordering

This combination is similar to release-acquire, but this time the atomic load is tagged with std::memory_order_consume and becomes an (atomic) load-consume operation. This mode is the same as release-acquire, with the only difference that, among the load operations sequenced after the load-consume, only those that depend on the value loaded by the load-consume are ordered.

Fences

Fences also allow memory operations to be ordered between threads. A fence is either a release fence or an acquire fence.

If a release fence happens before an acquire fence, then stores sequenced before the release fence are visible to loads sequenced after the acquire fence. To guarantee that the release fence happens before the acquire fence one may use other synchronization primitives including relaxed atomic operations.

Build Systems

Currently, there exists no universal or dominant build system for C++ that is both popular and cross-platform. However, there do exist several major build systems that are attached to major platforms/projects, the most notable being GNU Make with the GNU/Linux operating system and NMAKE with the Visual C++/Visual Studio project system.

Additionally, some Integrated Development Environments (IDEs) also include specialized build systems to be used specifically with the native IDE. Certain build system generators can generate these native IDE build system/project formats, such as CMake for Eclipse and Microsoft Visual Studio 2012.

Concurrency With OpenMP

OpenMP does not require any special headers or libraries, as it is a built-in compiler feature. However, if you use any OpenMP API functions, such as omp_get_thread_num(), you will need to include omp.h and link against the OpenMP runtime library.

OpenMP pragma statements are ignored when the OpenMP option is not enabled during compilation. You may want to refer to the compiler option in your compiler's manual.

  • GCC uses -fopenmp
  • Clang uses -fopenmp
  • MSVC uses /openmp
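
A minimal sketch (compile with one of the flags above; omp_get_thread_num() is why omp.h is included):

#include <cstdio>
#include <omp.h>

int main() {
    #pragma omp parallel   // the block below is executed once by every thread in the team
    {
        std::printf("hello from thread %d\n", omp_get_thread_num());
    }
}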

Type Inference

It is usually better to add const, &, or constexpr whenever you use auto, if it is required to prevent unwanted behaviors such as copying or mutation. Those additional hints ensure that the compiler does not deduce some other form. It is also not advisable to overuse auto; it should be used only when the actual declaration is very long, especially with STL templates.

std::integer_sequence

Resource Management

std::set and std::multiset

Different styles of C++ have been used in these examples. Be careful: if you are using a C++98 compiler, some of this code may not be usable.

Storage class specifiers

There are six storage class specifiers, although not all in the same version of the language: auto (until C++11), register (until C++17), static, thread_local (since C++11), extern, and mutable.

According to the standard,

At most one storage-class-specifier shall appear in a given decl-specifier-seq, except that thread_local may appear with static or extern.

A declaration may contain no storage class specifier. In that case, the language specifies a default behaviour. For example, by default, a variable declared at block scope implicitly has automatic storage duration.

Alignment

The standard guarantees the following:

  • The alignment requirement of a type is a divisor of its size. For example, a class with size 16 bytes could have an alignment of 1, 2, 4, 8, or 16, but not 32. (If a class's members only total 14 bytes in size, but the class needs to have an alignment requirement of 8, the compiler will insert 2 padding bytes to make the class's size equal to 16.)
  • The signed and unsigned versions of an integer type have the same alignment requirement.
  • A pointer to void has the same alignment requirement as a pointer to char.
  • The cv-qualified and cv-unqualified versions of a type have the same alignment requirement.

Note that while alignment exists in C++03, it was not until C++11 that it became possible to query alignment (using alignof) and control alignment (using alignas).

Inline variables

Linkage specifications

Curiously Recurring Template Pattern (CRTP)

Using declaration

A using-declaration is distinct from a using directive, which tells the compiler to look in a particular namespace when looking up any name. A using-directive begins with using namespace.

A using-declaration is also distinct from an alias declaration, which gives a new name to an existing type in the same manner as typedef. An alias declaration contains an equals sign.

Typedef and type aliases

Layout of object types

C incompatibilities

std::forward_list

Adding, removing and moving the elements within the list, or across several lists, does not invalidate the iterators currently referring to other elements in the list. However, an iterator or reference referring to an element is invalidated when the corresponding element is removed (via erase_after) from the list. std::forward_list meets the requirements of Container (except for the size member function and that operator=='s complexity is always linear), AllocatorAwareContainer and SequenceContainer.

Optimization

Semaphore

Thread synchronization structures

C++ Debugging and Debug-prevention Tools & Techniques

This topic isn't complete yet; examples of the following techniques/tools would be useful:

  • Mention more static analysis tools
  • Binary instrumentation tools (like UBSan, TSan, MSan, ESan ...)
  • Hardening (CFI ...)
  • Fuzzing

Futures and Promises

More undefined behaviors in C++

Mutexes

It is better to use std::shared_mutex than std::shared_timed_mutex when you do not need the timed locking functions.

The performance difference can be more than a factor of two.

If you want a reader/writer lock, you will find that there are two options: std::shared_mutex and std::shared_timed_mutex. You may think that std::shared_timed_mutex is just 'std::shared_mutex + timed locking methods'.

But the implementations are totally different.

The code below is the MSVC 14.1 implementation of std::shared_mutex.

class shared_mutex
{
public: 
typedef _Smtx_t * native_handle_type;

shared_mutex() _NOEXCEPT
    : _Myhandle(0)
    {    // default construct
    }

~shared_mutex() _NOEXCEPT
    {    // destroy the object
    }

void lock() _NOEXCEPT
    {    // lock exclusive
    _Smtx_lock_exclusive(&_Myhandle);
    }

bool try_lock() _NOEXCEPT
    {    // try to lock exclusive
    return (_Smtx_try_lock_exclusive(&_Myhandle) != 0);
    }

void unlock() _NOEXCEPT
    {    // unlock exclusive
    _Smtx_unlock_exclusive(&_Myhandle);
    }

void lock_shared() _NOEXCEPT
    {    // lock non-exclusive
    _Smtx_lock_shared(&_Myhandle);
    }

bool try_lock_shared() _NOEXCEPT
    {    // try to lock non-exclusive
    return (_Smtx_try_lock_shared(&_Myhandle) != 0);
    }

void unlock_shared() _NOEXCEPT
    {    // unlock non-exclusive
    _Smtx_unlock_shared(&_Myhandle);
    }

native_handle_type native_handle() _NOEXCEPT
    {    // get native handle
    return (&_Myhandle);
    }

shared_mutex(const shared_mutex&) = delete;
shared_mutex& operator=(const shared_mutex&) = delete;
private: 
    _Smtx_t _Myhandle;
};

void __cdecl _Smtx_lock_exclusive(_Smtx_t * smtx)
{    /* lock shared mutex exclusively */
AcquireSRWLockExclusive(reinterpret_cast<PSRWLOCK>(smtx));
}

void __cdecl _Smtx_lock_shared(_Smtx_t * smtx)
{    /* lock shared mutex non-exclusively */
AcquireSRWLockShared(reinterpret_cast<PSRWLOCK>(smtx));
}

int __cdecl _Smtx_try_lock_exclusive(_Smtx_t * smtx)
{    /* try to lock shared mutex exclusively */
return (TryAcquireSRWLockExclusive(reinterpret_cast<PSRWLOCK>(smtx)));
}

int __cdecl _Smtx_try_lock_shared(_Smtx_t * smtx)
{    /* try to lock shared mutex non-exclusively */
return (TryAcquireSRWLockShared(reinterpret_cast<PSRWLOCK>(smtx)));
}

void __cdecl _Smtx_unlock_exclusive(_Smtx_t * smtx)
{    /* unlock exclusive shared mutex */
ReleaseSRWLockExclusive(reinterpret_cast<PSRWLOCK>(smtx));
}

void __cdecl _Smtx_unlock_shared(_Smtx_t * smtx)
{    /* unlock non-exclusive shared mutex */
ReleaseSRWLockShared(reinterpret_cast<PSRWLOCK>(smtx));
}

You can see that std::shared_mutex is implemented using Windows Slim Reader/Writer (SRW) Locks (https://msdn.microsoft.com/ko-kr/library/windows/desktop/aa904937(v=vs.85).aspx).

Now let's look at the implementation of std::shared_timed_mutex.

The code below is the MSVC 14.1 implementation of std::shared_timed_mutex.

class shared_timed_mutex
{
typedef unsigned int _Read_cnt_t;
static constexpr _Read_cnt_t _Max_readers = _Read_cnt_t(-1);
public:
shared_timed_mutex() _NOEXCEPT
    : _Mymtx(), _Read_queue(), _Write_queue(),
        _Readers(0), _Writing(false)
    {    // default construct
    }

~shared_timed_mutex() _NOEXCEPT
    {    // destroy the object
    }

void lock()
    {    // lock exclusive
    unique_lock<mutex> _Lock(_Mymtx);
    while (_Writing)
        _Write_queue.wait(_Lock);
    _Writing = true;
    while (0 < _Readers)
        _Read_queue.wait(_Lock);    // wait for writing, no readers
    }

bool try_lock()
    {    // try to lock exclusive
    lock_guard<mutex> _Lock(_Mymtx);
    if (_Writing || 0 < _Readers)
        return (false);
    else
        {    // set writing, no readers
        _Writing = true;
        return (true);
        }
    }

template<class _Rep,
    class _Period>
    bool try_lock_for(
        const chrono::duration<_Rep, _Period>& _Rel_time)
    {    // try to lock for duration
    return (try_lock_until(chrono::steady_clock::now() + _Rel_time));
    }

template<class _Clock,
    class _Duration>
    bool try_lock_until(
        const chrono::time_point<_Clock, _Duration>& _Abs_time)
    {    // try to lock until time point
    auto _Not_writing = [this] { return (!_Writing); };
    auto _Zero_readers = [this] { return (_Readers == 0); };
    unique_lock<mutex> _Lock(_Mymtx);

    if (!_Write_queue.wait_until(_Lock, _Abs_time, _Not_writing))
        return (false);

    _Writing = true;

    if (!_Read_queue.wait_until(_Lock, _Abs_time, _Zero_readers))
        {    // timeout, leave writing state
        _Writing = false;
        _Lock.unlock();    // unlock before notifying, for efficiency
        _Write_queue.notify_all();
        return (false);
        }

    return (true);
    }

void unlock()
    {    // unlock exclusive
        {    // unlock before notifying, for efficiency
        lock_guard<mutex> _Lock(_Mymtx);

        _Writing = false;
        }

    _Write_queue.notify_all();
    }

void lock_shared()
    {    // lock non-exclusive
    unique_lock<mutex> _Lock(_Mymtx);
    while (_Writing || _Readers == _Max_readers)
        _Write_queue.wait(_Lock);
    ++_Readers;
    }

bool try_lock_shared()
    {    // try to lock non-exclusive
    lock_guard<mutex> _Lock(_Mymtx);
    if (_Writing || _Readers == _Max_readers)
        return (false);
    else
        {    // count another reader
        ++_Readers;
        return (true);
        }
    }

template<class _Rep,
    class _Period>
    bool try_lock_shared_for(
        const chrono::duration<_Rep, _Period>& _Rel_time)
    {    // try to lock non-exclusive for relative time
    return (try_lock_shared_until(_Rel_time
        + chrono::steady_clock::now()));
    }

template<class _Time>
    bool _Try_lock_shared_until(_Time _Abs_time)
    {    // try to lock non-exclusive until absolute time
    auto _Can_acquire = [this] {
        return (!_Writing && _Readers < _Max_readers); };

    unique_lock<mutex> _Lock(_Mymtx);

    if (!_Write_queue.wait_until(_Lock, _Abs_time, _Can_acquire))
        return (false);

    ++_Readers;
    return (true);
    }

template<class _Clock,
    class _Duration>
    bool try_lock_shared_until(
        const chrono::time_point<_Clock, _Duration>& _Abs_time)
    {    // try to lock non-exclusive until absolute time
    return (_Try_lock_shared_until(_Abs_time));
    }

bool try_lock_shared_until(const xtime *_Abs_time)
    {    // try to lock non-exclusive until absolute time
    return (_Try_lock_shared_until(_Abs_time));
    }

void unlock_shared()
    {    // unlock non-exclusive
    _Read_cnt_t _Local_readers;
    bool _Local_writing;

        {    // unlock before notifying, for efficiency
        lock_guard<mutex> _Lock(_Mymtx);
        --_Readers;
        _Local_readers = _Readers;
        _Local_writing = _Writing;
        }

    if (_Local_writing && _Local_readers == 0)
        _Read_queue.notify_one();
    else if (!_Local_writing && _Local_readers == _Max_readers - 1)
        _Write_queue.notify_all();
    }

shared_timed_mutex(const shared_timed_mutex&) = delete;
shared_timed_mutex& operator=(const shared_timed_mutex&) = delete;
private:
mutex _Mymtx;
condition_variable _Read_queue, _Write_queue;
_Read_cnt_t _Readers;
bool _Writing;
};

class stl_condition_variable_win7 final : public stl_condition_variable_interface
{
public:
    stl_condition_variable_win7()
    {
        __crtInitializeConditionVariable(&m_condition_variable);
    }

    ~stl_condition_variable_win7() = delete;
    stl_condition_variable_win7(const stl_condition_variable_win7&) = delete;
    stl_condition_variable_win7& operator=(const stl_condition_variable_win7&) = delete;

    virtual void destroy() override {}

    virtual void wait(stl_critical_section_interface *lock) override
    {
        if (!stl_condition_variable_win7::wait_for(lock, INFINITE))
            std::terminate();
    }

    virtual bool wait_for(stl_critical_section_interface *lock, unsigned int timeout) override
    {
        return __crtSleepConditionVariableSRW(&m_condition_variable, static_cast<stl_critical_section_win7 *>(lock)->native_handle(), timeout, 0) != 0;
    }

    virtual void notify_one() override
    {
        __crtWakeConditionVariable(&m_condition_variable);
    }

    virtual void notify_all() override
    {
        __crtWakeAllConditionVariable(&m_condition_variable);
    }

private:
    CONDITION_VARIABLE m_condition_variable;
};

You can see that std::shared_timed_mutex is implemented with std::condition_variable (and a std::mutex).

This is a huge difference.
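
The heavier, condition-variable based implementation is what pays for the extra interface: std::shared_timed_mutex offers try_lock_for, try_lock_until and their _shared counterparts, which std::shared_mutex does not. A minimal sketch of those calls (the functions and the 50 ms timeout are made up for illustration):

#include <chrono>
#include <shared_mutex>

std::shared_timed_mutex stm;

bool writeWithTimeout()
{
    using namespace std::chrono_literals;
    if (!stm.try_lock_for(50ms))        // give up if the exclusive lock is not obtained within 50 ms
        return false;
    // ... modify shared data ...
    stm.unlock();
    return true;
}

bool readWithTimeout()
{
    using namespace std::chrono_literals;
    if (!stm.try_lock_shared_for(50ms)) // give up if the shared lock is not obtained within 50 ms
        return false;
    // ... read shared data ...
    stm.unlock_shared();
    return true;
}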

So let's compare the performance of the two.

TEST RESULTS

This is the result of a read/write test that ran for 1000 milliseconds.

std::shared_mutex processed more than twice as many read/write operations as std::shared_timed_mutex.

In this example the read/write ratio is 1:1, but in real workloads reads are usually far more frequent than writes, so the performance difference can be even larger.

The code below is the benchmark used in this example.

#include <atomic>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <list>
#include <shared_mutex>
#include <thread>
#include <vector>

constexpr int MAIN_WAIT_MILLISECONDS = 1000;    // test duration (ms); the results above were measured with a 1000 ms run

void useSTLSharedMutex()
{
    std::shared_mutex shared_mtx_lock;

    std::vector<std::thread> readThreads;
    std::vector<std::thread> writeThreads;

    std::list<int> data = { 0 };
    std::atomic<bool> exit(false);    // stop flag shared with the worker threads

    std::atomic<int> readProcessedCnt(0);
    std::atomic<int> writeProcessedCnt(0);

    for (unsigned int i = 0; i < std::thread::hardware_concurrency(); i++)
    {

        readThreads.push_back(std::thread([&data, &exit, &shared_mtx_lock, &readProcessedCnt]() {
            std::list<int> mydata;
            int localProcessCnt = 0;

            while (true)
            {
                shared_mtx_lock.lock_shared();

                mydata.push_back(data.back());
                ++localProcessCnt;

                shared_mtx_lock.unlock_shared();

                if (exit)
                    break;
            }

            std::atomic_fetch_add(&readProcessedCnt, localProcessCnt);

        }));

        writeThreads.push_back(std::thread([&data, &exit, &shared_mtx_lock, &writeProcessedCnt]() {

            int localProcessCnt = 0;

            while (true)
            {
                shared_mtx_lock.lock();

                data.push_back(rand() % 100);
                ++localProcessCnt;

                shared_mtx_lock.unlock();

                if (exit)
                    break;
            }

            std::atomic_fetch_add(&writeProcessedCnt, localProcessCnt);

        }));
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(MAIN_WAIT_MILLISECONDS));
    exit = true;

    for (auto &r : readThreads)
        r.join();

    for (auto &w : writeThreads)
        w.join();

    std::cout << "STLSharedMutex READ :           " << readProcessedCnt << std::endl;
    std::cout << "STLSharedMutex WRITE :          " << writeProcessedCnt << std::endl;
    std::cout << "TOTAL READ&WRITE :              " << readProcessedCnt + writeProcessedCnt << std::endl << std::endl;
}

void useSTLSharedTimedMutex()
{
    std::shared_timed_mutex shared_mtx_lock;

    std::vector<std::thread> readThreads;
    std::vector<std::thread> writeThreads;

    std::list<int> data = { 0 };
    std::atomic<bool> exit(false);    // stop flag shared with the worker threads

    std::atomic<int> readProcessedCnt(0);
    std::atomic<int> writeProcessedCnt(0);

    for (unsigned int i = 0; i < std::thread::hardware_concurrency(); i++)
    {

        readThreads.push_back(std::thread([&data, &exit, &shared_mtx_lock, &readProcessedCnt]() {
            std::list<int> mydata;
            int localProcessCnt = 0;

            while (true)
            {
                shared_mtx_lock.lock_shared();

                mydata.push_back(data.back());
                ++localProcessCnt;

                shared_mtx_lock.unlock_shared();

                if (exit)
                    break;
            }

            std::atomic_fetch_add(&readProcessedCnt, localProcessCnt);

        }));

        writeThreads.push_back(std::thread([&data, &exit, &shared_mtx_lock, &writeProcessedCnt]() {

            int localProcessCnt = 0;

            while (true)
            {
                shared_mtx_lock.lock();

                data.push_back(rand() % 100);
                ++localProcessCnt;

                shared_mtx_lock.unlock();

                if (exit)
                    break;
            }

            std::atomic_fetch_add(&writeProcessedCnt, localProcessCnt);

        }));
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(MAIN_WAIT_MILLISECONDS));
    exit = true;

    for (auto &r : readThreads)
        r.join();

    for (auto &w : writeThreads)
        w.join();

    std::cout << "STLSharedTimedMutex READ :      " << readProcessedCnt << std::endl;
    std::cout << "STLSharedTimedMutex WRITE :     " << writeProcessedCnt << std::endl;
    std::cout << "TOTAL READ&WRITE :              " << readProcessedCnt + writeProcessedCnt << std::endl << std::endl;
}
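
A possible driver for the two benchmarks, assuming both functions and MAIN_WAIT_MILLISECONDS are defined in the same translation unit as above:

int main()
{
    useSTLSharedMutex();       // SRW-lock based std::shared_mutex
    useSTLSharedTimedMutex();  // condition-variable based std::shared_timed_mutex
    return 0;
}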

Unit Testing in C++

Recursive Mutex

decltype

Using std::unordered_map

As the name suggests, the elements of an unordered map are not stored in sorted order; they are stored in buckets according to their hash values. Because of this, looking up an item takes O(1) time on average, which is typically faster than ordered map containers such as std::map. It is also easy to use, as the example below shows: the subscript operator ( [] ) gives direct access to the mapped value.
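
For instance, a minimal sketch (keys and values are arbitrary):

#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<std::string, int> age;

    age["alice"] = 30;    // operator[] inserts the key if it is not present
    age["bob"]   = 25;

    std::cout << age["alice"] << '\n';    // prints 30

    // find() does not insert; it returns end() if the key is missing
    if (age.find("carol") == age.end())
        std::cout << "carol not found\n";
}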

Digit separators

C++ function "call by value" vs. "call by reference"

Basic input/output in C++

The standard library header <iostream> defines a few streams for input and output:

|stream | description                      |
|-------|----------------------------------|
|cin    | standard input stream            |
|cout   | standard output stream           |
|cerr   | standard error (output) stream   |
|clog   | standard logging (output) stream |

Of the four streams listed above, cin is used for user input and the other three are used for output. In most environments, cin (console input, or standard input) is connected to the keyboard and cout (console output, or standard output) to the monitor.

cin >> value

cin   - input stream
'>>'  - extraction operator
value - variable (destination)

Here cin extracts the input entered by the user and stores it in the variable value. The value is extracted only after the user presses the ENTER key.

cout << "Enter a value: "

cout              - output stream
'<<'              - insertion operator
"Enter a value: " - string to be displayed

Here cout takes the string to be displayed and inserts it into the standard output stream, i.e. the monitor.

All four streams are located in the standard namespace std, so to use a stream we need to qualify its name, for example std::cin and std::cout.

There is also the manipulator std::endl, which can be used only with output streams. It inserts the end-of-line character '\n' into the stream and flushes it, so the output appears immediately.
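
Putting the pieces together, a minimal program that reads one value and echoes it back:

#include <iostream>

int main()
{
    int value = 0;

    std::cout << "Enter a value: ";    // insertion into the standard output stream
    std::cin >> value;                 // extraction from the standard input stream

    std::cout << "You entered " << value << std::endl;    // endl writes '\n' and flushes
    return 0;
}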

Stream manipulators

The effect of each manipulator can also be achieved through stream member functions. For example (a runnable demonstration follows the list):

  1. os.width(n); is equivalent to os << std::setw(n);
    is.width(n); is equivalent to is >> std::setw(n);

  2. os.precision(n); is equivalent to os << std::setprecision(n);
    is.precision(n); is equivalent to is >> std::setprecision(n);

  3. os.setfill(c); is equivalent to os << std::setfill(c);

  4. str >> std::setbase(base); or str << std::setbase(base); is equivalent to
    str.setf(base ==  8 ? std::ios_base::oct :
                base == 10 ? std::ios_base::dec :
                    base == 16 ? std::ios_base::hex :
                         std::ios_base::fmtflags(0),
             std::ios_base::basefield);

  5. os.setf(std::ios_base::flag); is equivalent to os << std::flag;
    is.setf(std::ios_base::flag); is equivalent to is >> std::flag;

    os.unsetf(std::ios_base::flag); is equivalent to os << std::no ## flag;
    is.unsetf(std::ios_base::flag); is equivalent to is >> std::no ## flag;

    (where ## denotes concatenation)

    for the following flags: boolalpha, showbase, showpoint, showpos, skipws, uppercase.

  6. std::ios_base::basefield.
    For the flags dec, hex and oct:
  • os.setf(std::ios_base::flag, std::ios_base::basefield); is equivalent to os << std::flag;
    is.setf(std::ios_base::flag, std::ios_base::basefield); is equivalent to is >> std::flag;
    ( 1 )

  • str.unsetf(std::ios_base::flag, std::ios_base::basefield); is equivalent to str.setf(std::ios_base::fmtflags(0), std::ios_base::basefield);
    ( 2 )

  7. std::ios_base::adjustfield.
    For the flags left, right and internal:
  • os.setf(std::ios_base::flag, std::ios_base::adjustfield); is equivalent to os << std::flag;
    is.setf(std::ios_base::flag, std::ios_base::adjustfield); is equivalent to is >> std::flag;
    ( 1 )

  • str.unsetf(std::ios_base::flag, std::ios_base::adjustfield); is equivalent to str.setf(std::ios_base::fmtflags(0), std::ios_base::adjustfield);
    ( 2 )

( 1 ) If any previously set flag of the corresponding field has already been cleared with unsetf.
( 2 ) If the flag is set.

  8. std::ios_base::floatfield.
  • os.setf(std::ios_base::flag, std::ios_base::floatfield); is equivalent to os << std::flag;
    is.setf(std::ios_base::flag, std::ios_base::floatfield); is equivalent to is >> std::flag;

    for the flags fixed and scientific.

  • os.setf(std::ios_base::fmtflags(0), std::ios_base::floatfield); is equivalent to os << std::defaultfloat;
    is.setf(std::ios_base::fmtflags(0), std::ios_base::floatfield); is equivalent to is >> std::defaultfloat;

  9. str.setf(std::ios_base::fmtflags(0), std::ios_base::flag); is equivalent to str.unsetf(std::ios_base::flag)

    for the flags: basefield, adjustfield, floatfield.

  10. os.setf(mask) is equivalent to os << setiosflags(mask);
    is.setf(mask) is equivalent to is >> setiosflags(mask);

    os.unsetf(mask) is equivalent to os << resetiosflags(mask);
    is.unsetf(mask) is equivalent to is >> resetiosflags(mask);

    for almost any mask of type std::ios_base::fmtflags.
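
Here is the runnable demonstration mentioned above, showing a few of the member-function/manipulator pairs side by side (the chosen values are arbitrary):

#include <iomanip>
#include <iostream>

int main()
{
    // Padding the next field to width 10 with '*', two equivalent ways:
    std::cout.width(10);
    std::cout.fill('*');
    std::cout << 42 << '\n';                            // member functions: ********42

    std::cout << std::setw(10) << std::setfill('*')
              << 42 << '\n';                            // manipulators:     ********42

    // Selecting hexadecimal output, two equivalent ways:
    std::cout.setf(std::ios_base::hex, std::ios_base::basefield);
    std::cout << 255 << '\n';                           // ff

    std::cout << std::hex << 255 << '\n';               // ff

    // Clearing the basefield, same as unsetf(std::ios_base::basefield):
    std::cout.setf(std::ios_base::fmtflags(0), std::ios_base::basefield);
    return 0;
}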

C++ Containers

Arithmetic Metaprogramming