Hi all,

I have started finding lots of uses for abstract classes to define interfaces for common functionality using polymorphism, and I know that using polymorphism and virtual functions incurs some additional function-call cost. So what I'm wondering is: can I get the best of both worlds? Consider the following basic example:

class iShape
{
public:
  iShape();
  virtual ~iShape();

  virtual float GetArea( void ) = 0;
  virtual Pos3d GetPos( void ) = 0; // note: a name like "3dPos" is not legal, identifiers cannot start with a digit
};

Using an abstract class like this gives me a good interface, hides the implementation and all that, and I really like it. But I don't like the virtual call overhead. The use case I have in mind is actually custom state machines. I want to keep a generic interface like STL vector, list and string do, but I don't need the hidden data type doing the work polymorphism provides, as I will know in each situation exactly what kind of state machine I need. So can I omit the virtual from the class and just inherit it and use the inherited class type, or is there a better way? Something like:

class BasicShape
{
public:
  BasicShape();
  virtual ~BasicShape();

  float GetArea( void );
  Pos3d GetPos( void );
};

class Square : public BasicShape
{
public:
  Square();
  virtual ~Square();

  float GetArea( void );
  Pos3d GetPos( void );
};

If I do this as above, I don't see how I benefit from code reuse, as I have to redefine the functions in the base class to override their implementation, so I'm not sure how to tackle this kind of situation.

Any help / advice greatly appreciated

The real answer is that the overhead of calling a virtual function is so low that it is unlikely to cause you a problem in the majority of cases. This is definitely a case of trying to solve a problem before you know it exists.

In all likelihood your implementation with virtual functions will work plenty fast enough. You should build your solution correctly, with virtual functions, and only start looking for ways of speeding it up if it turns out that it doesn't meet its performance criteria (go fast enough).

The problem with your proposed solution is that you can't call the functions through the base class, or more likely a base-class pointer or reference. If you try, you will call the functions in the base class rather than the functions in the derived class; e.g. a std::vector<BasicShape*> will not have the behaviour you expect or probably want (note that std::vector<BasicShape&> is not even legal, since you cannot have a container of references).

On the other hand, if you don't need to call the functions through the base class, then why do you need a base class at all? Get rid of it, and then at least there won't be an issue in the code waiting to rear its head when someone who doesn't know the functions aren't virtual treats them as if they are.

Interesting question; it warrants a detailed response. So, let me address some issues before providing more concrete solutions.


>>I have started finding lots of uses for abstract classes to define interfaces for common functionality using polymorphism

RED FLAG!! Maybe you misspoke, because otherwise you are hinting at a very common mistake, what I call "derived class intersection": you take the different concrete classes that your software already has (or that you have in mind), find the things they have in common (i.e. the intersection of their functionality sets), and lump those into base classes. This is a problem for many reasons; mostly, it is a problem with the design methodology that will have bad consequences going forward.

The proper design methodology involves assessing the requirements that an algorithm or data structure imposes on the objects on which it must operate, and then creating abstractions that lump related requirements together as a base class, interface, or other construct (see later). Then you can create concrete (or final) classes that fulfill those requirements and thus implement the abstraction / interface. The point is, the choice of functionality that makes up an interface should be motivated solely by how the object is intended to be used by the algorithms or data structures. Whatever functionality turns out to be common between different derived classes is irrelevant to the design of your base classes or interfaces.

Down the line, the "derived class intersection" methodology leads you to create base classes that either require too much of their derived classes (i.e. you create a new derived class that lacks one of the "common" functionalities), or that turn out to be functionally useless because their functionalities don't match the inherent requirements of the algorithms that you have. This leads to multiple design cycles, awkward fudging of interfaces (i.e. providing dummy implementations for required functionality that a derived class cannot sensibly provide), and, overall, code that is error-prone and hard to maintain. So, beware of this common mistake in design methodology.


>>and I know that using polymorphism and virtual functions incurs some additional function-call cost.

Yes and no. The cost is really not that high, but it becomes significant when you are forced to use virtual functions for very simple methods (e.g. one-liner functions). The problem with a pure OOP design is that it often forces you to use virtual functions all the way down, including for the simpler operations where the virtual call represents, relatively speaking, a more significant overhead.

Note also that most of the overhead of virtual functions is not the lookup of the function pointer in the virtual table; it is the lost opportunity for the compiler to optimize, because it cannot see into the function that will ultimately be called. Common optimizations like return-value optimization, temporary elision, and function inlining become very difficult to do, and the compiler must remain cautious, and thus pessimistic.


>>So what I'm wondering is: can I get the best of both worlds?

Again, yes and no; it depends on what you define as "the best", but mostly, it depends on the level of flexibility you need.

First, you have to understand that "polymorphism" is a software engineering concept that really amounts to the ability to substitute an object of any of a number of different types into an algorithm or data structure. Basically, polymorphism is synonymous with "substitutability". Second, you have to know that in C++, we talk about two forms of polymorphism: static and dynamic. Static means the substitutions are resolved at compile time; dynamic means they are resolved at run time. The type of polymorphism you are referring to (and that is colloquially referred to as just "polymorphism" in the OOP world) is in fact dynamic polymorphism, and the mechanism by which the run-time substitutions are resolved is the virtual table and the virtual functions you find in it (a look-up process formally called "dynamic dispatching").

So, you can get "the best of both worlds": if you mean to get polymorphism and not suffer the overhead of dynamic dispatching, the solution is simply to use static polymorphism (how to do that will follow). The one thing you will be sacrificing is the run-time flexibility of dynamic polymorphism (being able to substitute different polymorphic objects at run time). However, static polymorphism offers a lot of additional benefits and flexibility features that simply are not possible with dynamic polymorphism (like type parametrization, type mappings, meta-programs, and scalable multiple dispatching).


>>I want to keep a generic interface like STL vector, list and string do

The STL is mostly designed in the style of "Generic Programming", which is the bread-and-butter of static polymorphism (i.e. GP is to static polymorphism what OOP is to dynamic polymorphism).


>>but I don't need the hidden data type doing the work polymorphism provides, as I will know in each situation exactly what kind of state machine I need.

It sounds to me like you are basically saying you will not need run-time polymorphism. If you know that you will always create the state machine programmatically (like setting up the state machine at the start of the main() function and then executing it), then you can toss dynamic polymorphism out the window and switch to static polymorphism, sparing yourself the overhead.


>>So can I omit the virtual from the class and just inherit it and use the inherited class type

Well, your compiler may warn about this (if you turn the warnings up to the maximum, like you always should), with something like "member function X in derived class Y hides the base class member Z::X". More importantly, you will get surprising behaviour if you are not cautious in your use of the derived class: any function that expects a base-class object by reference or pointer will not resolve any polymorphism whatsoever. It will just call the base-class function, because there is no mechanism for it to find out that the object is actually a derived-class object with another version of the function (that mechanism is "dynamic dispatching", and you turn it on by making the member function virtual).

>>or is there a better way?

Thought you'd never ask... Yes! I've been alluding to it all along: Generic Programming and static polymorphism. In concrete terms, this means: templates!

You can do something like this:

template <typename ShapeType>
void printAreaAndPos(const ShapeType& aShape) {
  std::cout << "The area of the shape is " << aShape.GetArea() << std::endl;
  std::cout << "The position of the shape is " << aShape.GetPos() << std::endl;
}

Now, the above function expects an object of some type ShapeType (which could be anything) that has two member functions, GetArea() and GetPos(), each returning some kind of value that can be printed to std::cout. This is, in its most basic form, static polymorphism: you get substitutability in the sense that any type of object can be given to that function, as long as it has the few functions the function requires, and the substitution is performed at compile time by the compiler via the mechanisms of C++ templates and function overloading. We would generally say that printAreaAndPos is a generic function (it works for any type that fulfills some basic contract), hence the term "Generic Programming" in its most basic sense.

You might ask: how do you define those requirements, like you would with the virtual functions of a base class? The answer is "concepts", which are the GP analog of base classes. A good example is the iterator concepts in the STL: "RandomAccess", "Bidirectional", "Forward", "Input" and "Output" are all concepts that an iterator can fulfill, each requiring a number of "valid expressions" like incrementing (Forward) or adding some integer offset (RandomAccess). In the example above, the valid expressions might be aShape.GetArea(); and aShape.GetPos();. This is essentially what you require a type to provide; if the type provides those things, we say that "type X models the Y concept", as in "std::vector<T>::iterator models the RandomAccessIterator concept". Concepts are even more useful when combined with a library like Boost.Concept-Check, which can preemptively cause a compilation failure to signal that a given type does not model the required concept(s), rather than letting the compilation failure occur way down in the guts of the library or function that was called. Concepts are basically the way you formalize abstractions in GP.

Generic programming has a ton of other benefits and neat techniques (and is indeed far more powerful than object-oriented programming, as long as you don't need run-time flexibility), but I can't drag on forever about it. If you want to do compile-time polymorphism, this is basically the way to do it. You will need to use templates, because any other solution you might conjure up (like the one you posted) either won't work the way you think it does, won't work at all, or will be very hard to maintain.


If you are not willing or prepared to use templates, and you still want some form of polymorphism, then you should stick to OOP (with virtual functions), and you will probably find that the overhead is not that significant (it depends on how fine-grained your use of polymorphism is).


Hi again,

Thanks very much for the help. To Banfa: I totally agree with what you were saying, and I admit, yes, I'm trying to solve a non-issue, but I was just looking for a way to provide a standard interface for a collection of classes (the base class) without the runtime overhead of virtual functions, as I will know in advance which type I am using in each case.

Mike, that is a great post and it has changed my view of both polymorphism and templates. I had always seen them as solving different problems; I saw template code as a way of reducing the amount of code the developer has to write, like in the STL, so the same code works with different data types, but I had never drawn the connection to how it provides static polymorphism.

I think I will give this a go with templates; as I now understand it to work, I think it's exactly what I'm looking for. I will also look at the Boost concept library, thanks for that suggestion too.

All in all, I think I need both templates and polymorphism for this solution: I will know my state machine's type at compile time, but the state objects need the run-time linking...

Again, many thanks to both of you for your help with this; the quest to be a competent programmer continues :)
