Simple C++11 – Part II – Class declarations

In the previous part, I showed my guidelines for setting up compilation units. When writing simple application code with C++11, classes and free functions should be your main building blocks. Therefore, in this part, I will focus on what to look out for when writing class declarations.

While templates can be very useful, they do not scale well as the code base gets larger. Metaprogramming or other niche styles have their places, too, but I like to look at those as a means to create language extensions rather than principal implementation tools.

Avoid inline implementations

…especially in header files. It can be tempting to write classes solely in the header file. In fact, it has almost become a sign of quality for parts of C++ code to be header-only. But this scales badly in most cases, and evolving such a code base will result in a dramatic explosion of compile times. Always splitting classes into a declaration and a definition acts as a first-level compile-time firewall and dependency breaker. Users of your class no longer need to worry about changes in the implementation of that class’s member functions. Note that those changes are often indirect: a change only affects a class that is used in the implementation of your class’s member functions. By splitting declaration and definition, users of your class do not have to be recompiled.

But why stop at the compiler? The same argument holds for programmers. If you start to split interface and implementation on this level, you automatically provide ‘reader-firewalls’ as well. By just providing a clean header file, you are giving readers sort of a manual for your class. No need to look at the implementation at all, if the interface is well-defined.

Forced inline definition is also the main argument against excessive use of templates. Yes, they grant a lot of flexibility, but you pay a hefty price that needs to be justified by an enormous reduction of complexity elsewhere. In general, templates are a bit too powerful for their own good, which is why they need extra moderation.

Always declare implicit functions

Implicitly declared functions seem comfortable, but they have a few implications that are hard to understand. First off, if an implicit function gets generated for your class, it is generated as inline. This means that the implementation becomes a dependency of all users of your class. This can have very subtle effects, such as this:

#include <string>
#include <vector>

class Entry;
class EntryGenerator;

class EntryManager {
public:
  EntryManager(EntryGenerator& generator);
  int getEntryCount() const;
  std::string getIDForEntry(int index) const;
private:
  std::vector<Entry> mData;
};

On the surface, it looks like there should be no dependency (other than the name) on Entry when including this header. But there is!
The destructor is not declared, so it will get generated – as inline. Because destroying a vector requires the held type to be complete, any place that needs to be able to destroy an EntryManager also needs to know how to destroy an Entry, which is not intended at all. Remember that there is a total of six functions that can be implicitly generated! Because of that, there are analogous problems for copy-construction, copy-assignment, move-construction and move-assignment.

To avoid these problems, either delete the function explicitly in the header, default it in the implementation file, or actually implement it. You rarely need to do the latter, so I advise defaulting all the ones you need and deleting the rest:

#include <string>
#include <vector>

class Entry;
class EntryGenerator;

class EntryManager {
public:
  EntryManager(EntryGenerator& generator);
  EntryManager(EntryManager const&) = delete;
  EntryManager& operator=(EntryManager const&) = delete;
  EntryManager(EntryManager&& rhs);
  EntryManager& operator=(EntryManager&& rhs);
  ~EntryManager();
  int getEntryCount() const;
  std::string getIDForEntry(int index) const;
private:
  std::vector<Entry> mData;
};

And somewhere in the implementation file:

EntryManager::EntryManager(EntryManager&& rhs) = default;
EntryManager::~EntryManager() = default;
EntryManager& EntryManager::operator=(EntryManager&& rhs) = default;

This has another nice side effect: the vector template gets instantiated into that one object file and does not “bloat” all use sites.

Exactly one public function section and one private data section per class

…starting with the public section. This is where you address the next programmer who has to read your class. And it should be the only place they have to look.

I avoid private member functions because they cannot be tested easily and can add hidden compile-time dependencies to a project. Why should a user of your class recompile when you change an implementation detail? For small and trivial implementation helpers, the unnamed namespace in the implementation file is a much better place. If those helpers become larger or more complex, it is a better idea to implement them in a collaborating class, which can be tested and reused.
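As a sketch of what that looks like (EntryManager and its helper are made-up names for illustration):

// EntryManager.cpp -- hypothetical implementation file
#include "EntryManager.hpp"

#include <string>

namespace {

// An implementation helper in the unnamed namespace instead of a private
// member function: changing it does not touch the header, so users of
// EntryManager are not recompiled.
std::string formatID(int index)
{
  return "entry-" + std::to_string(index);
}

} // namespace

std::string EntryManager::getIDForEntry(int index) const
{
  return formatID(index);
}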

Protected member functions split your interface into two parts: one exclusively for derived classes and one for everyone (including derived classes). This is very rarely needed, and in almost all of those cases, a separate interface will scale better (although it is slightly harder to implement).

Either an interface or an implementation

So far, I have left inheritance out of the picture and only talked about concrete classes. Inheritance is actually rarely needed; composition often suffices. But if it is needed, make sure that a class is either concrete and final (an implementation), or has a complete and minimal set of pure-virtual member functions (an interface). This will result in shallow hierarchies and easily understood interfaces. Remember that inheritance is not a tool for sharing code among the classes you implement, but for the code using those classes – i.e. where the Liskov Substitution Principle holds.

Now it gets really easy to implement new classes in the hierarchy: just implement all the functions in the interface. No more questioning whether to keep the default behaviour or override it. You will also automatically tend towards a clearer separation of components – things that need to be polymorphic move to the interface, other functionality merely uses it.

This pattern is useful even when polymorphism is not needed. Such small interfaces devoid of any implementation detail can act as another compiler firewall. Collaborators can work with just the interface and do not have to be recompiled when the implementation changes. Also, the interface can be implemented by mock or fake objects for testing.
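Here is a minimal sketch of such a pair, with invented storage names; the fake shows the testing angle:

#include <string>

// The interface: a complete and minimal set of pure-virtual member
// functions, no data, no implementation details.
class EntryStorage {
public:
  virtual ~EntryStorage() = default;
  virtual void store(std::string const& id) = 0;
  virtual int entryCount() const = 0;
};

// A concrete, final implementation: implements every function of the
// interface, nothing more.
class FileEntryStorage final : public EntryStorage {
public:
  void store(std::string const& id) override;
  int entryCount() const override;
};

// In a test, a fake can implement the same interface:
class FakeEntryStorage final : public EntryStorage {
public:
  void store(std::string const&) override { ++mCount; }
  int entryCount() const override { return mCount; }
private:
  int mCount = 0;
};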

Conclusion

This concludes the second part of the series. I originally intended it to be about how to write a whole class, but that would have been too much to digest for one post. I am well aware that some of these guidelines can stir quite the controversy in the C++ community. For example, declaring the implicit functions seems to be in conflict with the recently popular rule of zero. Scott Meyers had similar concerns, but does not quite touch the inline aspect.

For me personally, these guidelines have helped tremendously, especially when scaling to bigger code-bases. But as before, I am curious what others are thinking about this!

Small is beautiful – CherryPy Microframework

We develop web applications for our clients using various frameworks and technologies. The choice depends on several factors, like hosting, the client’s IT infrastructure and administration, and of course the project scope. Some of our successful web projects use full-stack frameworks such as Grails or Ruby on Rails (with JRuby). While they provide a ton of powerful features, they also intrinsically carry some complexity. This is not always needed or wanted; some projects fare well with much less. Sometimes the scope of a project is not entirely clear, so choosing a lean alternative that you can gradually extend and refine may be the right fit.

Full stack frameworks

A full stack framework usually delivers built-in solutions for persistence/database access, domain modelling, routing, HTML templating and so on. If you know for sure that you will need all the provided features from the start and that they are a good fit, feel free to choose a full stack framework like Rails, Grails, Play or Django. If you need less, or there is no convincing solution for your needs, start with a minimal system that delivers value to your clients.

Microframeworks

Microframeworks provide only the basic functionality for routing and handling requests. No templating, no database, no session management etc. attached. We have had really good experiences with microframeworks like Nancy for .NET or CherryPy for Python. You start very simple and are up and running in minutes. If most of your GUI is client-side, you do not need templating – and you do not have it from the start. If you do not need a database, well, you do not have one. If your server-side logic revolves around REST, there you go!

The key point is that you are not stuck with this minimal set of support: you can easily extend the framework with features if need be. And usually you have several options per concern to choose from. For templating in Python there are many different solutions, like Mako, Jinja2, Genshi and others. Whether you need only file-based persistence, want to manage your database in SQL or need an object-relational mapper (ORM) – everything is up to you.

CherryPy

Our experience with CherryPy has been very positive. It is easy to run standalone using the integrated web server. You can also run it behind any WSGI-compatible web server, because a CherryPy application automatically serves as a WSGI application. This is very nice if you need the power of a native web server along with the ease and flexibility of a Python microframework. CherryPy is also very flexible when it comes to request routing, offering much convenience with the default routing and method annotations to expose actions on the one hand, and _cp_dispatch or MethodDispatcher on the other. It feels easy to extend the solution bit by bit as needed; there is seldom something in your way. Configuration is easy and powerful, and the documentation gets you up to speed most of the time.

Conclusion

Microframeworks let you choose what you need and which implementation best fits your needs. You do not carry all that complexity with you from the start, and you can easily pull in more framework support as you go, choosing the most appropriate option for your current situation. Your choices may differ in the next project, since every project is different.

Recap of the Schneide Dev Brunch 2015-12-13

Two weeks ago, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. So if you bring a software-related topic along with your food, everyone has something to share. The brunch was small this time, but with enough stuff to talk about. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

Company strategies

Our first topic was about the changes that happen in company culture once a certain size threshold is overstepped. The founders lose touch with their own groundwork and then with their own employees. Compliance frameworks are installed and enforced, even if the rules make no sense in specific cases. A new hierarchy layer, the middle management, springs into existence and is populated by people who have never worked on the topic but make all the decisions. The brightest engineers are promoted to management positions and find themselves helpless and overburdened. Adopting a new technology or tool now takes forever. The whole company stalls technologically.

Sounds familiar? We discussed several cases of this dramaturgy and some ways around it. One possible remedy is to never grow big enough. Stay small, stay fast and stay agile. That’s the Schneide way.

Code analysis with jDeodorant

We devoted a lot of time to getting to know jDeodorant, an Eclipse-based code smell detection tool for Java. We grabbed a real project and analyzed it with the tool. Well, this step alone took its time, because the plugin cannot be operated in an intuitive manner. It presents itself as a collection of student thesis work without an overarching narrative and with a clear disregard for expectation conformity. If several experienced Eclipse users cannot figure out how a tool works despite having used similar tools for years, something is amiss. We got past the bad user experience by viewing several screencasts, the most noteworthy being a plain feature demonstration.

Once you figure out the handling, the tool helps you find code smells and refactoring opportunities. In our case, most of the findings were false alarms or overly picky. But in two cases, the tool provided a clear hint on how to make the code better (both being feature envies). Whether the project would really benefit from the proposed refactorings is subject to discussion. The tool acts like a very assiduous colleague in a code review where every improvement gets rewarded.

We really don’t know how to rate this tool. It’s hard to learn and provides little value on first sight, but might be useful on larger legacy code bases. We’ll keep it at the back of our minds.

Naming and syntax rules

During the discussion about jDeodorant, we talked about naming schemes and other syntax rules. We remembered horrific conventions like prefixed I and E or suffixed Exception. The last one got some curious looks, because it’s still a convention in the Java SDK and some names won’t make it without, like the beloved IOException. But what about the NullPointerException? Wouldn’t NullPointer describe the problem just as well? Kevlin Henney already talked about this and other ineffective coding habits (if you experience audio degradation halfway through, try another video of the talk). It’s a good eye-opener to (some of) the habits we’ve adopted without questioning them. But challenging the status quo is a good thing if done in reasonable doses and with a constructive attitude.

Unit testing

When we played around with jDeodorant and surfed the code of the project that served as our testing ground, the Infinitest widget raised some questions. So we talked about Continuous Testing, unit tests and some pitfalls if your tests aren’t blazing fast. The Eclipse plugin MoreUnit was mentioned soon after. Those two plugins really make a difference in working with tests. Especially the unannounced shortcut Ctrl+J is very helpful. I’ve even blogged about the topic back in 2011.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Simple C++11 – Part I – Unit Structure

C++ has long had the stigma of an overly complex and unproductive language. Lately, with the advent of C++11, things have brightened a bit, but there are still a lot of misconceptions about the language. I think this is mostly because C++ has been taught the wrong way. This series aims to show my, hopefully somewhat simpler, way of using C++11.

Since it is typically the first thing I do when starting a new project, I will start with how I set up a new compilation unit, i.e. a header and implementation file pair.

Note that I will try not to focus on a specific C++11 paradigm, such as object-oriented or imperative. This structure seems to work well for all kinds of paradigms. But without much further ado, here’s the header file for my imaginary “MyUnit” unit:

MyUnit.hpp

#pragma once

#include <vector>
#include "MyStuff.hpp"

namespace MyModule { namespace MyUnit {

/** Does something only a good bar could.
*/
std::vector<float> bar(int fooCount);

/** Foo is an integral part of any program.
    Be sure to call it frequently.
*/
void foo(MyStuff::BestType somethingGood);

}}

I prefer the .hpp file ending for headers. While I’m perfectly fine with .h, I think it is helpful to differentiate pure C headers from C++ headers.

#pragma once

I’m using #pragma once here instead of include guards. It is not an official part of the standard, but all the big compilers (Visual C++, g++ and clang) support it, making it a de-facto standard. Unlike include guards, you have to add only one line, which says exactly what you want to achieve with it. You do not have to find a unique identifier for your include guard that will most certainly break if you rename the file/unit. It’s more readable, more resilient to change and easier to set up.
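For comparison, a sketch of both variants:

// Classic include guard: three lines, and the identifier has to be
// unique and kept in sync when the file is renamed.
#ifndef MYMODULE_MYUNIT_HPP
#define MYMODULE_MYUNIT_HPP
/* ... */
#endif

// #pragma once: a single line, nothing to invent or rename.
#pragma once
/* ... */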

Namespaces

I like to have all the contents of a unit in a single namespace. The actual structure of the namespaces – i.e. per unit, per module, or something else entirely – depends on the specifics of the project, but filling more than one namespace from a single unit is a guarantee for chaos. It’s usually a sign that the unit should be broken up into smaller pieces. An exception to this would be the infamous “detail” namespace, as seen in many of the Boost libraries. In that case, the namespace is not used to structure the API, but to explicitly omit things from the API that have to be visible for technical reasons.
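A minimal sketch of such a “detail” namespace (all names made up):

#pragma once

namespace MyModule { namespace MyUnit {

namespace detail {
/** Not part of the API – only visible here because the template below needs it. */
int scaleFactor(int base);
}

template <typename T>
T scaled(T value)
{
  return value * detail::scaleFactor(2);
}

}}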

Documentation

Documentation goes into the header, not into the implementation. The header describes the API, not only to the compiler, but also to humans. It is by no means an implementation detail, but part of the seam that isolates it from the rest of the code. Note that this part of the documentation concerns the API contract only, never the implementation. That part goes into the .cpp file.

But now to the implementation file:

MyUnit.cpp

#include "MyUnit.hpp"

#include "CoolFunctionality.hpp"

using namespace MyModule;
using namespace MyUnit;

namespace {

int helperFunction(float rhs)
{
  /* ... */
}

}// namespace

std::vector<float> MyUnit::bar(int fooCount)
{
  /* ... */
}

void MyUnit::foo(MyStuff::BestType somethingGood)
{
  /* ... */
}

Own #include first

The only rule I have for includes is that the unit’s own include always comes first. This tests whether the header is self-sufficient, i.e. whether it will compile without being in the context of other headers or, even worse, code from an implementation file. Some people like to order the rest of their includes according to their “origin”, e.g. sections for system headers or library headers. I think imposing any extra order here is not needed. If anything, I prefer not to waste time sorting include directives and just append an include when I need it.

Using namespace

I choose using-directives for my unit’s namespaces over explicitly accessing the namespaces each time. Unlike the header, the implementation file lives in a locally defined context. Therefore, it is not a problem to use a very specific view onto the unit. In fact, it would be a problem to be overly generic. The same argument also holds for other “local” modules that this unit is only using, as long as there are no collisions. I avoid using-directives for namespaces from external libraries (such as std, boost etc.) to keep the library boundary visible.

Unnamed namespace

The unnamed namespace contains all the implementation helpers specific to this unit. It is quite common for this to contain a lot of the “meat” of an actual unit, while the unit’s visible functions merely wrap and canonize the functionality implemented here. I try to keep only one unnamed namespace in each file, to have a clear separation of what is supposed to be visible to the outside – and what is not.

Visible implementation

The implementation of the visible API of the module is the most obvious part of the .cpp file. For consistency reasons, the order of the functions should be the same as in the header.

I’d advise against implementing in a file-wide open namespace. That means balancing an unnecessary pair of braces over the whole implementation file. Also, in an open namespace you can not only define functions and types, but also declare new ones – which can lead to a function further down in the implementation file seeing a different namespace content than one before it.
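A sketch of one such surprise, continuing the MyUnit example from above: inside an open namespace, a signature mismatch silently declares a new overload instead of failing to compile.

#include "MyUnit.hpp"

namespace MyModule { namespace MyUnit {

// The header declares bar(int). This mismatch compiles fine inside the
// open namespace -- it silently declares a brand new overload.
std::vector<float> bar(unsigned int fooCount)
{
  return {};
}

}}

// Had this been written as
//   std::vector<float> MyModule::MyUnit::bar(unsigned int fooCount) { ... }
// the compiler would have rejected it, because a qualified definition
// must match a previous declaration.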

Conclusion

This concludes the first part. I’ve played with the thought of using a 3-piece setup instead, extending the header/implementation with a unit-test file, but have not gathered any sharable experience yet. This setup, however, has worked for me for a long time and with many different projects. Have you had similar – or completely different – setups that worked for you? Do tell!

Synchronizing asynchronous calls in JavaScript

In Node.js all I/O performing operations like HTTP requests, file access etc. are designed to be non-blocking. The functions for these operations usually take a callback function as their last parameter, which will be called once the operation has finished. The asynchronous nature of these operations often requires special means of synchronization, as this post exemplifies.

Let’s assume we want to download a set of files from a list of URLs:

var urls = [
    'http://example.org/file1',
    'http://example.org/file2',
    'http://example.org/file3',
];

If we were to use a classic, synchronous blocking function for the download operation the implementation might look like this:

urls.forEach(function(url) {
    downloadSync(url);
    console.log('Downloaded ' + url);
});
console.log('Downloads complete.');

Output:

Downloaded http://example.org/file1
Downloaded http://example.org/file2
Downloaded http://example.org/file3
Downloads complete.

The program loops through the list of URLs and downloads one file after the other. Each subsequent download is only started after the previous download has finished. No two downloads are active at the same time. Finally, the program prints a message indicating that all downloads are complete.

Now we want to embrace the asynchronous programming style of Node.js. We have a different function, downloadAsync, which is a non-blocking asynchronous function. The function uses the callback convention of Node.js to let the programmer handle the completion of the asynchronous operation:

urls.forEach(function(url) {
    downloadAsync(url, function() {
        console.log('Downloaded ' + url);
    });
});
console.log('Downloads complete.');

Output:

Downloads complete.
Downloaded http://example.org/file2
Downloaded http://example.org/file3
Downloaded http://example.org/file1

The download operations are now started at the same time and finish in a different order, depending on how long it takes each file to download. The quickest download finishes first, the slowest last. The total time for all downloads to finish is probably shorter than before, because slower connections do not make faster connections wait for them to finish.

However, there is one problem with this code: the message “Downloads complete.” is shown before the downloads are actually completed, which is obviously not the intended behavior. This happens because the control flow continues immediately after the forEach loop has started all the downloads.

The ‘async’ module

What we need to correct this behavior is a way to wait for all asynchronous download operations started by the loop to finish before we print the “Downloads complete” message.

The async module for JavaScript provides various synchronization mechanisms for exactly these kinds of situations. In our case we’re going to use async.each:

var async = require('async');

async.each(urls, function(url, callback) {
    downloadAsync(url, function() {
        console.log('Downloaded ' + url);
        callback();
    });
}, function done() {
    console.log('Downloads complete.');
});

The async.each function is similar to JavaScript’s forEach function. However, the function called for each iteration has an additional callback parameter, which must be called when the iteration is considered complete. The third parameter to async.each is also a callback function. This function is called after all iterations have reported their completion. This is where we place the output of the “Downloads complete” message.

Conclusion

The async module provides a rich toolset for the synchronization of asynchronous operations in JavaScript. Go check it out!

Multi-Page TIFFs with C++

If you are dealing with high-speed cameras or other imaging equipment capable of producing many images in a short time, you may find it handy to put many images into a single file. There are several reasons to do so:

  • Dealing with thousands of files in a single directory or spreading them over a directory hierarchy may be slow and cumbersome depending on the tools.
  • Storing many images together may communicate better that they belong together, e.g. to the same scan.
  • Handling and transmitting fewer files is often easier than juggling with many.

TIFF is a widespread lossless image format capable of holding many individual images in a single file. Many image viewers and image manipulation tools are able to work with multi-page TIFF files, so you are quite flexible in working with such files.

But how do you produce these files from your programs? I found some solutions with different strengths and weaknesses:

Using Magick++

Magick++ is the C++ API for ImageMagick – a powerful image manipulation library. If you can hold all the images for one file in memory, the code is easy and straightforward:

#include <string>
#include <vector>
#include <Magick++.h>

class TiffWriter
{
public:
    TiffWriter(std::string filename);
    TiffWriter(const TiffWriter&) = delete;
    TiffWriter& operator=(const TiffWriter&) = delete;
    ~TiffWriter();
    
    void write(const unsigned char* buffer, int width, int height);

private:
    std::vector<Magick::Image> imageList;
    std::string filename;
};

TiffWriter::TiffWriter(std::string filename) : filename(filename) {}

// for example for an 8 bit gray image buffer
void TiffWriter::write(const unsigned char* buffer, int width, int height)
{
    Magick::Blob gray8Blob(buffer, width * height);
    Magick::Geometry size(width, height);
    Magick::Image gray8Image(gray8Blob, size, 8, "GRAY");
    imageList.push_back(gray8Image);
}

TiffWriter::~TiffWriter()
{
    Magick::writeImages(imageList.begin(), imageList.end(), filename);
}

The caveat is that you need to hold all your images in memory before writing them to the file on disk. I did not manage to add and persist images to disk on the fly.

In our environment that was absolutely necessary because of the amount of data and the I/O required to persist all images in time. So I had to implement a slightly more low-level solution using libtiff and its C API.

Using libtiff

#include <string>
#include <tiffio.h>

class TiffWriter
{
public:
    TiffWriter(std::string filename, bool multiPage);
    TiffWriter(const TiffWriter&) = delete;
    TiffWriter& operator=(const TiffWriter&) = delete;
    ~TiffWriter();
    
    void write(const unsigned char* buffer, int width, int height);

private:
    TIFF* tiff;
    bool multiPage;
    unsigned int page;
};

TiffWriter::TiffWriter(std::string filename, bool multiPage)
    : tiff(TIFFOpen(filename.c_str(), "w")), multiPage(multiPage), page(0)
{
}

void TiffWriter::write(const unsigned char* buffer, int width, int height)
{
    if (multiPage) {
        /*
         * I seriously don't know if this is supposed to be supported by the format,
         * but it's the only we way can write the page number without knowing the
         * final number of pages in advance.
         */
        TIFFSetField(tiff, TIFFTAG_PAGENUMBER, page, page);
        TIFFSetField(tiff, TIFFTAG_SUBFILETYPE, FILETYPE_PAGE);
    }
    TIFFSetField(tiff, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tiff, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tiff, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tiff, TIFFTAG_SAMPLEFORMAT, SAMPLEFORMAT_UINT);
    TIFFSetField(tiff, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tiff, (unsigned int)-1));

    unsigned int samples_per_pixel = 1;
    unsigned int bits_per_sample = 8;
    TIFFSetField(tiff, TIFFTAG_BITSPERSAMPLE, bits_per_sample);
    TIFFSetField(tiff, TIFFTAG_SAMPLESPERPIXEL, samples_per_pixel);

    std::size_t stride = width;
    for (int y = 0; y < height; ++y) {
        // TIFFWriteScanline expects a writable pointer, hence the const_cast
        TIFFWriteScanline(tiff, const_cast<unsigned char*>(buffer) + y * stride, y, 0);
    }

    TIFFWriteDirectory(tiff);
    page++;
}

TiffWriter::~TiffWriter()
{
    TIFFClose(tiff);
}

Note that the TIFFTAG_PAGENUMBER and TIFFTAG_SUBFILETYPE fields in the multiPage branch are needed if you do not know the number of images to store in the file in advance!
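To round things off, here is a minimal usage sketch for the libtiff variant; the file name, frame size and dummy frame contents are made up for illustration:

#include <vector>

int main()
{
    int const width = 640;
    int const height = 480;
    std::vector<unsigned char> frame(width * height, 128); // dummy 8 bit gray frame

    TiffWriter writer("scan.tif", true);
    for (int i = 0; i < 100; ++i) {
        // in a real setup, fill 'frame' from the camera here
        writer.write(frame.data(), width, height);
    }
    // the destructor closes the TIFF file
    return 0;
}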

Meet my Expectations!

A while ago I came across a particularly irritating piece of code in a harmless-looking mathematical vector class. C++’s rather rare feature of operator overloading makes it a good fit for multi-dimensional calculations, so vector classes are common and I had already seen quite a few of them in my career. It looked something like this:

template <typename T>
class vec2
{
public:
  /* A few member functions.. */
  bool operator==(vec2 const& rhs) const;

  T x;
  T y;
};

Not many surprises here, except that maybe the operator==() should be a free-function instead. Whether the data members of the class are an array or named individually is often a point of difference between vector implementations. Both certainly have their merits. But I digress…
What really threw me off was the implementation of the operator==(). How would you implement it? Intuitively, I would have expected pretty much this code:

template <typename T>
bool vec2<T>::operator==(vec2 const& rhs) const
{
  return x==rhs.x && y==rhs.y;
}

However, what I found instead was this:

template <typename T>
bool vec2<T>::operator==(vec2 const& rhs) const
{
  if (x!=rhs.x || y!=rhs.y)
  {
    return false;
  }

  return true;
}

What is wrong with this code?

Think about that for a moment! Can you swiftly verify whether this boolean logic is correct? You actually need to apply De Morgan’s laws to get to the expression from the first implementation!
This code was not technically wrong. In fact, for all technical purposes, it was working fine. And it seems functionally identical to the first version! Still, I think it is wrong on at least two levels.
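Spelling out the transformation step by step makes the equivalence visible:

// The second implementation effectively returns:
//   !(x != rhs.x || y != rhs.y)
// De Morgan's laws turn the negated disjunction into a conjunction:
//   !(x != rhs.x) && !(y != rhs.y)
// and negating != (assuming it is consistent with ==) yields:
//   x == rhs.x && y == rhs.y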

Different relations

Firstly, it bases its equality on the inequality of its contained type, T. I found this quite surprising, so this alone violated the POLA for me. I immediately asked myself: why did the author choose to implement this based on operator!=() and not on operator==()? After all, supplying equality for relations is common in templated C++, while inequality is inferred. In a way, this is more intuitive: inequality already has the negation in its name, while equality is something “original”! Not only that, but why base the equality on a different relation of the contained type instead of the same one? This can actually be a problem when the vector is instantiated on a type that supplies operator==() but not operator!=() – though that would be equally surprising. It turned out that the vector was only used with built-in types, so those particular concerns were moot. At least, until it is later used with a custom type.
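For reference, a sketch of the conventional pattern this paragraph alludes to, with equality supplied as the “original” relation and inequality derived from it (as free functions, as mentioned earlier):

template <typename T>
bool operator==(vec2<T> const& lhs, vec2<T> const& rhs)
{
  return lhs.x == rhs.x && lhs.y == rhs.y;
}

// Inequality is inferred from equality instead of being implemented
// independently:
template <typename T>
bool operator!=(vec2<T> const& lhs, vec2<T> const& rhs)
{
  return !(lhs == rhs);
}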

Too many negations

Secondly, there’s the case of immediately returning a boolean after a condition. This alone is often considered a code smell. It could be argued that it is more readable, but I don’t want to argue in favor of pure brevity. I want to argue in favor of clarity! In this case, the construct is basically used to negate the boolean expression, further obscuring the result of the whole function.
So the function uses a double negation (not un-equal) to express a positive concept (equal). And negations are a big source of errors and often lead to confusion.

Conclusion

You need to make the code as simple and clear as possible and avoid any surprises, especially when dealing with the relatively unconstrained context of C++ templates. In other words, you need to meet the expectations of the naive reader as well as possible!