Because I honestly feel like I'll never understand them (it's been years, by the way).

So I just want to know if it's possible to make any program you fully desire without pointers or any other complex OOP features such as encapsulation and typedefs.

I just can't work with that logic. It ruins my whole programming skill set, but my EXACT question is: can it be done?

Because I honestly feel like I'll never understand them (it's been years, by the way).

If you want to feel like you understand things, quit programming immediately. I like to say that any decent programmer will be in a constant state of confusion.

my EXACT question is can it be done?

I'll say yes with hesitation, because while you can, it's far from practical except in the smallest of programs.

If you want to feel like you understand things, quit programming immediately. I like to say that any decent programmer will be in a constant state of confusion.


I'll say yes with hesitation, because while you can, it's far from practical except in the smallest of programs.

All I needed to know is whether it can be done. Sorry, but as far as I go with programming, I will not do something I don't fully understand, or at least 90% understand.

I'll feel too unprofessional working with things that make absolutely no sense to me in every way, so I won't use those methods.

I'll just use whatever methods I can understand and work with. Shame it may be for me, but pointers are still confusing to the point where I cannot know where, how, or when to use them with anything in any possible case.

That's your choice, though avoiding things one doesn't understand rather than attacking them with gusto strikes me as a stupid and cowardly way to live.

You just told me that if I feel like I understand things, I should quit programming.

And constant state of confusion? My point all along was to understand ALL of C++'s uses and functions and validate whether or not they'd be 100% useful in every outlook and instance.

I never learned pointers and still don't know them well enough, because no one explains them or gives good enough examples to the point where I can understand their full use to the max in any given instance where they may be needed.

If no one gives me the answer to the previous question then I will, inevitably, never learn how to use pointers, and not because of my inability, but because no one has ever given a good enough example of how, when, or why they'd need to be used, what exactly they do that makes them required in certain cases, and how exactly those cases become efficient with them but not without them.

Like in Windows API for example.....

What part of it confuses you? Instead of trying to avoid it, why not try to understand it, even if it's just a little more? And remember, we're here to help.

You just told me that if I feel like I understand things, I should quit programming.

If you want to feel like you understand everything, quit. Because it's not going to happen. Not only is our field simply too complex to fully grasp, it's constantly evolving. There's simply no way to keep up and know it all at the same time.

My point all along was to understand ALL of C++'s uses and functions and validate whether or not they'd be 100% useful in every outlook and instance.

That's unrealistic.

I never learned pointers and still don't know them well enough, because no one explains them or gives good enough examples to the point where I can understand their full use to the max in any given instance where they may be needed.

That's also an unrealistic expectation. The learning process is cumulative and slow. You need to be comfortable with figuring out a little bit here and a little bit there. With your current attitude toward education, you've set yourself up for failure.

What part of it confuses you? Instead of trying to avoid it, why not try to understand it, even if it's just a little more? And remember, we're here to help.

Okay, if you're actually going to help instead of judge or be biased on gender, I'll give a code example.

#include <windows.h>
#include <gl/gl.h>
LRESULT CALLBACK WndProc (HWND hWnd, UINT message,
WPARAM wParam, LPARAM lParam);
void EnableOpenGL (HWND hWnd, HDC *hDC, HGLRC *hRC);
void DisableOpenGL (HWND hWnd, HDC hDC, HGLRC hRC);
int WINAPI WinMain (HINSTANCE hInstance,
                    HINSTANCE hPrevInstance,
                    LPSTR lpCmdLine,
                    int iCmdShow)
{
    WNDCLASS wc;
    HWND hWnd;
    HDC hDC;
    HGLRC hRC;        
    MSG msg;
    BOOL bQuit = FALSE;
    float theta = 0.0f;
    wc.style = CS_OWNDC;
    wc.lpfnWndProc = WndProc;
    wc.cbClsExtra = 0;
    wc.cbWndExtra = 0;
    wc.hInstance = hInstance;
    wc.hIcon = LoadIcon (NULL, IDI_APPLICATION);
    wc.hCursor = LoadCursor (NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH) GetStockObject (BLACK_BRUSH);
    wc.lpszMenuName = NULL;
    wc.lpszClassName = "GLSample";
    RegisterClass (&wc);
    hWnd = CreateWindow (
      "GLSample", "OpenGL Sample", 
      WS_CAPTION | WS_POPUPWINDOW | WS_VISIBLE,
      0, 0, 256, 256,
      NULL, NULL, hInstance, NULL);
    EnableOpenGL (hWnd, &hDC, &hRC);  /* pass the addresses of hDC and hRC so the function can fill them in */
    while (!bQuit)
    {
        if (PeekMessage (&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
            {
                bQuit = TRUE;
            }
            else
            {
                TranslateMessage (&msg);
                DispatchMessage (&msg);
            }
        }
        else
        {
            glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
            glClear (GL_COLOR_BUFFER_BIT);

            glPushMatrix ();
            glRotatef (theta, 0.0f, 0.0f, 1.0f);
            glBegin (GL_TRIANGLES);
            glColor3f (1.0f, 0.0f, 0.0f);   glVertex2f (0.0f, 1.0f);
            glColor3f (0.0f, 1.0f, 0.0f);   glVertex2f (0.87f, -0.5f);
            glColor3f (0.0f, 0.0f, 1.0f);   glVertex2f (-0.87f, -0.5f);
            glEnd ();
            glPopMatrix ();

            SwapBuffers (hDC);

            theta += 1.0f;
            Sleep (1);
        }
    }
    DisableOpenGL (hWnd, hDC, hRC);

    DestroyWindow (hWnd);

    return (int) msg.wParam;
}
LRESULT CALLBACK WndProc (HWND hWnd, UINT message,
                          WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    case WM_CREATE:
        return 0;
    case WM_CLOSE:
        PostQuitMessage (0);
        return 0;

    case WM_DESTROY:
        return 0;

    case WM_KEYDOWN:
        switch (wParam)
        {
        case VK_ESCAPE:
            PostQuitMessage(0);
            return 0;
        }
        return 0;

    default:
        return DefWindowProc (hWnd, message, wParam, lParam);
    }
}
void EnableOpenGL (HWND hWnd, HDC *hDC, HGLRC *hRC)
{
    PIXELFORMATDESCRIPTOR pfd;
    int iFormat;
    *hDC = GetDC (hWnd);
    ZeroMemory (&pfd, sizeof (pfd));
    pfd.nSize = sizeof (pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | 
      PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 16;
    pfd.iLayerType = PFD_MAIN_PLANE;
    iFormat = ChoosePixelFormat (*hDC, &pfd);
    SetPixelFormat (*hDC, iFormat, &pfd);
    *hRC = wglCreateContext( *hDC );
    wglMakeCurrent( *hDC, *hRC );

}
void DisableOpenGL (HWND hWnd, HDC hDC, HGLRC hRC)
{
    wglMakeCurrent (NULL, NULL);
    wglDeleteContext (hRC);
    ReleaseDC (hWnd, hDC);
}

Okay, now in the code I just gave, could you explain, even if briefly, what all the references to pointers are for? And how exactly does their use come into play with the Windows API and OpenGL?

If you want to feel like you understand everything, quit. Because it's not going to happen. Not only is our field simply too complex to fully grasp, it's constantly evolving. There's simply no way to keep up and know it all at the same time.

You clearly didn't get what I meant. I want to understand things ENOUGH that I will not be confused. I know things change, evolve and all, but the point is that I can keep learning new things and getting myself to a level where I'm straightforward about doing whatever work I want and NOT confused by things as I go, as I will be with programming.

I know it's impossible to keep up and know it all, but knowing most of the functions that "I" will be using is necessary, and that knowledge will have to be deep in whatever is relevant to whatever I need to do, am doing, or want to do.

You can't determine if a feature is useful to you without understanding it. :icon_rolleyes:

You can't determine if a feature is useful to you without understanding it. :icon_rolleyes:

Again, back to my point. If I don't understand it first, I can't do anything with it, basically. But that's solely the fault of the people who write terrible tutorials and articles on how pointers work, which never provide good enough details, inside and out, on how or where they'd ACTUALLY be used and WHEN.

And I can determine whether it'd be useful to some extent without knowing about it, but the more I know, the more I can judge. :)

You're in a tough situation. You refuse to learn unless you know something is useful to you, but you can't know if it's useful without learning it. And it's clearly not your fault, but the fault of those horrible people who just won't teach you properly. :icon_rolleyes:

Dude, you're a hoot. I officially invite you to Daniweb's IRC channel so that you can entertain us in realtime chat.

The point Narue is trying to make is that you have to actually try to do something with a function/feature/component before you can begin to understand it. It's called practice. You'll never learn anything if you don't step into the unknown once in a while.

There is no magic crystal ball that you can look into to see if you will ever use something or not. You're far better off learning it and not needing it than never learning it and later finding out that you need it.

Also, "Is it possible to work without pointers and some OOP in C++?"
Have you ever used an array? An array isn't literally a pointer, but an array's name decays into a pointer to its first element in most expressions, so you've been working near pointers all along.

Have you ever used a vector? A vector is an object that contains several instances of a type of data.

Study how arrays and vectors work and you'll start to understand pointers and OOP.

About pointers, there are two "alternatives" (the quote marks mean that they really can't completely replace pointers, but can significantly reduce the need for them).

First, a very popular design pattern in C++ is RAII (Resource Allocation Is Initialization). This just means that when an object is created it initializes the resources it needs and when it gets destroyed it frees them. I warn you, this relies on encapsulation. A simple example is the std::vector class in STL versus C-style dynamic arrays which use raw pointers. Consider those two equivalent codes:

#include <vector>

int main() {
  std::vector<int> my_array;   // creating the object my_array also initializes it to a valid state
  my_array.resize(50);         // requesting my_array to have a new size of 50
  for(int i = 0; i < 50; ++i)  // looping around to set the values
    my_array[i] = i;
  return 0;
}                              // here, my_array goes out of scope -> destroyed -> frees the array of integers

int main() {
  int* my_array;               // creating a raw pointer with an invalid state
  my_array = new int[50];      // allocating memory for 50 integers and storing its address in my_array
  for(int i = 0; i < 50; ++i)  // looping around to set the values
    my_array[i] = i;
  delete[] my_array;           // deleting the array manually is required to avoid a memory leak
  return 0;
}

You see, because std::vector uses the RAII principle, the first piece of code is much superior to the second: there is no point at which an invalid state exists, and the user of std::vector doesn't have to worry about memory leaks (or, more generally, the management of the resource, i.e. the array of integers, is encapsulated in the vector class). By making great use of RAII, you will typically end up with code that makes very little use of pointers or other low-level resource descriptors (or, at least, you can wrap the code that deals with the low-level resource in a RAII-style class and have the rest of the code deal only with that class).

Second, if you are doing OOP, then there is the problem of storing polymorphic objects (e.g. holding a list of base-class pointers that actually point to objects of derived classes). This "requires" pointers, but it doesn't require "raw" pointers. The use of smart pointers can make this job a lot easier. Raw pointers have a classic problem which is all about semantics. Consider this simple class:

class Foo {
  private:
    Bar* ptr;
  public:
    Foo(Bar* aPtr) : ptr(aPtr) { }
};

Now, is the above erroneous or not? It would be erroneous if Foo is supposed to _own_ the object pointed to by ptr, because in that case, when Foo is destroyed, it should also delete the pointer ptr (so a destructor implementation would be missing from the above code). However, if Foo is supposed to merely _refer_ to the object pointed to by ptr, then the above code is bad style, but not erroneous. Because raw pointers are just pointers, they convey neither the information nor the behaviour required to distinguish these two cases, and this is why raw pointers are rarely used in good-style C++.

Smart pointers come to the rescue by attaching semantics and behaviour to raw pointers. They come in three flavours with three specific meanings and uses (shared_ptr and weak_ptr can be found in TR1 (the std::tr1:: namespace) or in the Boost smart pointer library; unique_ptr needs C++11's move semantics, with Boost's scoped_ptr as a near-equivalent).

A unique_ptr means that the pointed-to object has one and only one owner, so unique_ptr is not copyable (but is moveable) and will automatically delete the pointed-to object when it is destroyed (goes out of scope or is part of an object that got destroyed).

A shared_ptr means that the pointed-to object has multiple owners (who share ownership), so it is copyable (with reference counting) and will automatically delete the pointed-to object when all its owners have relinquished their ownership (i.e. disappeared).

Finally, a weak_ptr means that the pointed-to object is not owned by the weak_ptr but merely referred to; thus, a weak_ptr has to be converted to a shared_ptr before the pointed-to object can be accessed, and that is only possible if the object has not been deleted yet (i.e. there are still some shared_ptrs around that own it).

By using these types of pointers, you can basically get rid of the awkward semantics of raw pointers, and with that (and the automatic behaviour of smart pointers) you can avoid all the typical problems of pointer use (e.g. dereferencing an invalid pointer, leaking memory, or managing the ownership and lifetimes of the pointed-to objects).

The third thing to mention is the use of references instead of pointers. In many, many cases, a raw pointer (or a smart pointer) is not necessary because a reference is much more appropriate. For example, modifying a passed parameter in a function is a classic case where pointers are bad style:

void modifyInt(int& value) { // pass "value" by reference, so that it can be modified
  value = 42;                // an obvious expression; no confusion about what this does!
}
void modifyInt(int* value) { // passing by pointer is an option, but it is worse:
  *value = 42;               // the explicit dereference is extra noise compared to the reference version,
  value = 0;                 // and forgetting the * is an easy typo: this compiles, but it silently
}                            // nulls the (local) pointer instead of zeroing the int

So, to answer your question:
>>Is it possible to work without pointers and some OOP in C++?
It is possible to significantly reduce the use of pointers (at least raw pointers) and still do pretty much anything you like (personally, I cannot think of any situation where a raw pointer is actually the best thing to use; it is only when interfacing with C-style code or APIs that you might need them).

As for OOP... well, technically speaking, you don't have to program in OOP in C++; you can just do procedural programming, and it is Turing complete, so, yeah, in theory, you can do anything you like that way. However, OOP is popular because it is about 1000 times more effective than procedural programming. Note how RAII is based on encapsulation and how smart pointers use abstraction; these OOP concepts are not evil complex things that exist to make your life miserable and complicated, they are there because they are very powerful and useful at making code easier to use, understand, and maintain. You might want to read the C++ Coding Standards book by Sutter and Alexandrescu; they make it very clear why these OOP techniques (and others) are there to help you be a better programmer and produce better quality code at lower cost and time.


EDIT: I guess I took the original question too seriously (I got thrown off by the "it has been years"). Now I see all the posts in this thread and I realize that the OP is only using this thread as an "I'm pissed because programming is not as easy and straightforward as I thought". If you want control over what you do, you need knowledge, understanding and practice, none of those come _easy_, none of those can be _complete_, and the search for them will _never_ _end_, you have to accept that before you can even start to learn to program.

commented: I feel sorry for you. Your well-intentioned and excellent post fell on deaf ears. +25
commented: Excellent post in a strange thread +4
commented: *copy/paste Narue's reputation* +3

Honestly, the only result I see coming from this thread is quitting, because that's what I'm basically going to have to do.

I felt that programming was to be more fun and involved hard work but work I could understand and use.

It's just too complicated, with too many different ways and angles things can go, and I just can't work with it.

I guess I'll have to do something else, something I'll have full control over.

NOTE TO THE LAST POSTER BEFORE THIS ONE: You overdid it. You went on and on about how it works and just gave me another headache. Sorry, but your "explanation" is not good enough for me to get.

And I have practiced, but honestly, still don't get it enough.

Some people aren't cut out for programming. You might be one of those people. Or you might not, and you're just being melodramatic. We're our own worst critics as well, so it's possible that you understand more than you believe and hold yourself back by being too hard on yourself.

If you want control over what you do, you need knowledge, understanding and practice, none of those come _easy_, none of those can be _complete_, and the search for them will _never_ _end_, you have to accept that before you can even start to learn to program.

A lot of the programming field of work (like many scientific fields of work) is about what you don't know, what you don't understand, and what you have little practical knowledge of; these are the things that should tickle you into wanting to fill those gaps. If they truly frighten you (as opposed to exciting you, as they do for me), then this might not be the best place for you.

If you want control over what you do, you need knowledge, understanding and practice, none of those come _easy_, none of those can be _complete_, and the search for them will _never_ _end_, you have to accept that before you can even start to learn to program.

A lot of the programming field of work (like many scientific fields of work) is about what you don't know, what you don't understand, and what you have little practical knowledge of; these are the things that should tickle you into wanting to fill those gaps. If they truly frighten you (as opposed to exciting you, as they do for me), then this might not be the best place for you.

What I'd really want to do is create machine architecture and instructions from scratch (like processing for a system as a whole).

Like if I had a programming language "I" could understand then I would love doing it. If I have to work with other things that "other" people knew and made then I'd have a harder time.

That's why I want to one day make my own system myself that does things the way only I want it to. Then I'd generate things how I can understand them. Get it?

Microsoft did it, so it's not impossible.

That way no one can say Windows is by Microsoft, which made this or that... I could say I made it, I understand it, and I can work with it any way I WANT.

I guess until that day comes I won't be happy with programming.

I guess until that day comes I won't be happy with programming.

Sucks to be you then.

Hi Spoons. Still chasing pointers, I see. It's been months now, and you've not got anywhere. Maybe it's time to take a break and come back to it in a few months.

Sucks to be you then.

Are you saying I couldn't possibly manage to create computer hardware myself and have that hardware only known by myself and use it to my advantage in every way?

From then on I could hard-write instruction sets for a programming language which can be used to maintain an OS, devices and drives and GUIs and APIs. And from THEN on I could program happily.

Dude, you're a hoot. I officially invite you to Daniweb's IRC channel so that you can entertain us in realtime chat.

Well, this thread has had one use... I didn't know there was a DaniWeb IRC!

Every now and then we post the login information, but nobody really wants to hang out there because it's not terribly active. The more people that hang out, the more active it will be!

What I'd really want to do is create machine architecture and instructions from scratch (like processing for a system as a whole).

Like if I had a programming language "I" could understand then I would love doing it. If I have to work with other things that "other" people knew and made then I'd have a harder time.

That's why I want to one day make my own system myself that does things the way only I want it to. Then I'd generate things how I can understand them. Get it?

Microsoft did it, so it's not impossible.

That way no one can say Windows is by Microsoft, which made this or that... I could say I made it, I understand it, and I can work with it any way I WANT.

I guess until that day comes I won't be happy with programming.

If that's your ultimate goal and opinion of the industry, it's going to be a VERY rocky road for you getting there. At some point you'll have to learn to deal with a lack of understanding and a certain level of uncertainty.

If that's your ultimate goal and opinion of the industry, it's going to be a VERY rocky road for you getting there. At some point you'll have to learn to deal with a lack of understanding and a certain level of uncertainty.

As long as I KNOW for a fact that I will gain the ability to create a processor myself, and a system as a whole comparable to Windows, and my own programming languages that can deliver quality equal to this language and other graphics libraries (which I'll have to make), then I'll be happy struggling through it all.

At least then I'll get somewhere and move on, rather than stay stuck with f------ pointers.

If you can't understand pointers, I fear that the fields of advanced mathematics, electrical engineering, solid state physics and all the rest needed to craft a new processor comparable to a modern PC processor are also beyond you.

By comparison to those fields, pointers are trivially simplistic.

If you can't understand pointers, I fear that the fields of advanced mathematics, electrical engineering, solid state physics and all the rest needed to craft a new processor comparable to a modern PC processor are also beyond you.

By comparison to those fields, pointers are trivially simplistic.

To each their own is all I'm going to say. Just because I can't fully understand a pointer, which is a specific thing used in computer systems, doesn't mean I can't create a whole new idea similar to it in a different way altogether. Look at some geniuses in our history... some couldn't do "simple" things but apparently changed the world... Not saying I'm a genius, which I'm not, but I have ideas.

My lack of pointer knowledge doesn't guarantee an inability to make a processor.

>>Are you saying I couldn't possibly manage to create computer hardware myself and have that hardware only known by myself and use it to my advantage in every way?

Sure, you can create your own computer hardware. All you need is:
- a Ph.D. in semi-conductor physics and integrated circuits
- a grade 5000 clean room
- a silicon furnace and surface condenser
- high-precision etchers
- printed circuit board manufacturing facility (at least 16 layers)
- a Ph.D. in Thermodynamics, Heat Transfer, and Probabilistic Quantum Physics
- ....
- and 20 life-times!

You said "Microsoft did it with Windows so I can too". First of all, all Microsoft started with was DOS, which was heavily inspired by the Unix systems of the time. They didn't build computers; they didn't even come up with the programming language, not even a compiler. All they did, at first, was a small OS that used already existing platforms and standards (made by _other_ people). And, most importantly, it became a "good" OS because people found it practical, and that grew their company, and with the combined expertise of all their employees they developed increasingly complex software solutions and OSes.

The important thing here is "combined expertise". No one, in the field of computers or in any other field in general, knows everything (and usually understands much less). It is all about trusting that the people who have expertise in one area can do their job well so that I don't have to care about that part of the problem. I trust that Intel and AMD people are really, really good at manufacturing processors (and designing the accompanying instruction sets) because they have been doing only that for decades. So I know that there is no point in my trying to design a processor, because I know it will not be any better than theirs. Just like there is no point in my trying to make my own programming language, because I am humble enough to realize that I could not come up with anything better than C, C++, Python, etc. However, there is a very narrow, specific field in which I think I can do better than those before me, and that is what I concentrate on. You have to figure out what you're most interested in, what you think you can contribute to, and concentrate on that (and learn to use the tools that you need).


>>From then on I could hard-write instruction sets for a programming language which can be used to maintain an OS, devices and drives and GUIs and APIs. And from THEN on I could program happily.

You are going to be a very busy and very lonely person. For example, if you were to invent your own language (as in a "natural language" like English or French), it would be of no use, because nobody would understand you, and that's the whole point of languages. The reason there are standard programming languages, standard OSes, standard devices, standard plugs, standard APIs, standard every-freaking-thing, is that all these things are only as useful as the number of people (or companies) using them, and the best way to make a product widespread is to make or pick a standard and comply with it.

If it's worth anything, OP, I was on a similar train of thought when I first started learning programming too. For example, I didn't want to use std::vector because I didn't want to use other people's code. But I realized that that train of thought will leave you learning for the rest of your life and making no use of the learning. Eventually, you need to let go of this "if I didn't create it, I don't want to use it" ego. For example, because you don't know C++, you don't want to use C++. And to create C++ you need to know a hell of a lot of crap. And then, say you started to create another language: well, that hell of a lot of crap uses another hell of a lot of crap, and it will be a never-ending cycle for you, until you finally start creating your own hardware in your basement, possibly using tools that you might not know how they work, and then there goes the cycle again.

As for your pointer question earlier in the thread (with the OpenGL context): the only reason there are pointers in that code is so that, when the variables are passed to the function, they can get modified, as Mike already explained.

Maybe you should just learn another language first, before diving into advanced C++. I suggest reading this; it gives a nice perspective on how to learn programming. If you decide to still program in C++, you will find yourself in positions where pointers, or other elements of the language, are the best solution, and they will come naturally to you.

commented: nice article.. I totally agree! +1