Hello, I'm using VB.NET (Visual Studio) to create a chatbot (a program that mimics a human and has a text conversation with you). Two of my main classes are a Word class and a Sentence class. A Sentence object is basically going to be a linked list of Word objects.
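For concreteness, the structure I have in mind is roughly this (sketched in Python just to keep it short; my real classes are VB.NET, and the names here are placeholders):

```python
class Word:
    """One node in the sentence's linked list (hypothetical names)."""
    def __init__(self, text):
        self.text = text
        self.next = None          # link to the following Word, or None

class Sentence:
    """A linked list of Word objects."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, text):
        node = Word(text)
        if self.tail is None:     # first word in the sentence
            self.head = self.tail = node
        else:                     # splice onto the end of the list
            self.tail.next = node
            self.tail = node

    def text(self):
        words, node = [], self.head
        while node is not None:   # walk the chain of links
            words.append(node.text)
            node = node.next
        return " ".join(words)

s = Sentence()
for t in "the cat sat".split():
    s.append(t)
print(s.text())  # the cat sat
```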

Here's my question: as the program executes, many Word objects will be instantiated, and so will many Sentence objects. But each class defines several constants, procedures, and functions. For each object instantiated, are new copies of the constants, procedures, and functions created in memory? Or is only one copy created?

I hope my question is clear. My fear is that, since many, many objects will be created during execution (each word of a sentence will be an object), I'll run out of memory if the constants/procedures/functions are defined in memory again and again.

If this is the case, what is the best way to structure my program so that the objects created contain only the variable data, and the constants/procedures/functions are defined only once?

I am not sure how well this will answer your question, but unused objects are handled by the garbage collection mechanism. Once objects no longer have references in the code, they are destroyed. There is no reliable way to tell when an object has been destroyed, though, so an object will sit in memory until the garbage collector has located and destroyed it.

You can put in this line of code to trigger the garbage collection process:

System.GC.Collect()

I understand garbage collection, but all of these word objects will be in memory and referenceable (made up my own word there) throughout the entire course of the program. I'm designing it that way for specific reasons.

So really what I'm asking is: for any object whose class defines a number of functions, procedures, and maybe constants, do those functions/procedures/constants take up memory for each object created? For instance, if a Word object has the function getWord() defined in memory, and I create 50 Word objects, will getWord() be defined in 50 separate places in memory?


Yes

OK, so that doesn't sound very efficient. How, then, can an object that is used hundreds or even thousands of times, whose class has a long list of procedures and/or functions, be efficient? It sounds to me like encapsulating procedures, constants, or whatever inside a class that has thousands of instances leads to a lot of unnecessary repetition of code.

Am I overestimating how much memory is actually used? Even with thousands of objects whose procedure definitions are stored in memory again and again, is it maybe still not a lot of memory?

Or am I hitting upon one of the weaknesses of the object-oriented approach?

I actually agree with you. I definitely think OOP has its place, but for the most part I limit it to cases where I need multiple instances with different data at the same time. If it's a case of only needing things one at a time, a regular module file may be more helpful, since you can limit an object's or variable's lifespan to the running of the individual function/sub that is called.

Other than that, the best you can do is limit the lifespan of your data: create only what is needed and destroy it as soon as you're done with it. I think implementing IDisposable and disposing of your objects after each use is probably the best you can do as far as cleanup. Even calling the garbage collector doesn't actually mean memory will be released as soon as you call it.

If you want to run an experiment creating objects to see how poorly the memory usage is handled, create a simple blank project where the only thing the main form does, in a button click event, is create an instance of a new form and show it. Run Task Manager and keep opening forms, and watch the memory usage accumulate. Then try adding a garbage collector call to clean up after you close each form; you won't see much difference.

After that, change the call from Show to ShowDialog. For some reason Form.Show does not give you a way to dispose of the form, but ShowDialog does. Set it to dispose after you close the form and see the difference.

Oddly (although this may just be a case of improper reporting by Task Manager), the only time you really see the memory completely cleared is when you minimize the project window to the taskbar...

I happened to come across this post while Googling on a related topic, and I felt compelled to correct the response provided by "TomW."

Going back to your original question, the answer is "NO." Defining functions (more accurately, "methods") on an object will NOT cause those methods to be repeated in memory based on the number of instances of that object you create. This goes for any .NET method (Function, Sub [in VB], Property, etc.), and also for any constant or any Shared member variable.

A method will be JIT-compiled only once, and the resulting code will be stored in memory only once (per AppDomain, though there are exceptions). Once JIT-ed, a pointer to that method is stored in the type information for your object, and that same compiled method is used by every object instance you create. This is true for every OO language I know of, .NET included.
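You can see the same principle at work in any managed OO language. A quick sketch (in Python here only because it is easy to run interactively; the class and method names are hypothetical, and the CLR achieves the same sharing through its type information) shows that every instance's method is backed by one shared function object:

```python
class Word:
    """Hypothetical stand-in for the VB.NET Word class."""
    def __init__(self, text):
        self.text = text

    def get_word(self):
        return self.text

words = [Word(str(i)) for i in range(50)]

# Fifty instances, but every bound method wraps the SAME underlying
# function object: the method body exists in memory exactly once.
funcs = {w.get_word.__func__ for w in words}
print(len(funcs))  # 1
```

Each instance stores only its own data (here, `text`); the method is looked up through the class, not copied into the object.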

Memory will obviously increase with every object that you create based on the non-shared member variables that you have defined in your class, but I think you already knew that. Feel free to create as many functions as you want.

The reason methods (and constants, etc.) don't add to memory consumption per object instantiation is that OOP is a coding abstraction invented to make our lives easier. When our OOP code is ultimately compiled into machine code, it is purely procedural. When you call a method on one of your objects, what actually happens is that a register is set to point to the memory location where your instantiated object's data (the member variables) is stored, and then a jump is made to the memory location of the method's code. There's no need to make separate copies of the method.
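This "method call = plain function call plus a hidden data pointer" equivalence is easy to demonstrate (again a hedged Python sketch with hypothetical names; the CLR does the analogous thing by passing `Me`/`this` under the hood):

```python
class Word:
    def __init__(self, text):
        self.text = text

    def get_word(self):          # 'self' is the hidden data pointer
        return self.text

w = Word("hello")

# These two calls do the same thing: one shared function,
# handed a reference to this particular instance's data.
print(w.get_word())              # hello
print(Word.get_word(w))          # hello
```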

Despite what TomW says, OOP always has its place.

As for his test with opening forms and garbage collection, I'm not clear on what he is attempting to say. How ShowDialog and Show relate to Dispose is a separate topic. Calling Dispose on an object does not by itself mean anything will be cleared from memory; that would only be the case if Dispose were releasing unmanaged memory.

Instead of his test, create an array of 1,000,000 objects and look at the memory usage. Then de-reference the array (set it to Nothing), call GC.Collect(), and look at the difference; you'll see one.
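A rough analogue of that experiment (sketched in Python for convenience; note CPython frees most of this via reference counting the moment the list is dropped, so treat it as illustrative rather than a model of the CLR's collector):

```python
import gc
import tracemalloc

tracemalloc.start()

# A million small objects, all reachable through one list.
objs = [(i,) for i in range(1_000_000)]
before, _ = tracemalloc.get_traced_memory()

# De-reference the "array" and collect; the memory comes back.
objs = None
gc.collect()
after, _ = tracemalloc.get_traced_memory()

print(after < before)  # True: usage drops sharply once nothing references the objects
```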

As for minimizing your application: that isn't actually releasing memory used by your application; it is only paging your application's working set out to disk. Once you start doing things in the application again, that information will need to be reloaded. Here's an explanation:
http://support.microsoft.com/default.aspx?scid=kb;en-us;293215
The Task Manager can be misleading since it is reporting the "Working Set."

Good answer, thank you. That makes sense. I was surprised by TomW's answer, but couldn't find any other good answers on the web. If OO languages were implemented the way I had originally thought, that would be a major weakness and a major use of memory.
