>>Ok, so time() function (argument being null) returns what?
It returns the number of seconds since 1970, as you previously posted. It doesn't matter whether the parameter is NULL or not, it will always return the same value.
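For anyone following along, here is a minimal sketch of the two calling styles (it assumes a typical desktop implementation where time_t converts cleanly to long long for printing):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t stored;
    time_t returned = time(&stored);  /* return value is also copied into 'stored' */
    time_t from_null = time(NULL);    /* NULL argument: you only get the return value */

    /* Printing via a cast assumes time_t converts sensibly to long long,
       which holds on common desktop implementations. */
    printf("returned  = %lld\n", (long long)returned);
    printf("stored    = %lld\n", (long long)stored);
    printf("with NULL = %lld\n", (long long)from_null);
    return 0;
}

The two values may differ by a second if the clock happens to tick between the calls; the point is only that passing NULL or a pointer doesn't change what the function reports.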
It returns the number of seconds since 1970
Perhaps on your implementation. Here's the specification for the time() function in its entirety:
7.23.2.4 The time function
Synopsis
1 #include <time.h>
time_t time(time_t *timer);
Description
2 The time function determines the current calendar time. The encoding of the value is
unspecified.
Returns
3 The time function returns the implementation’s best approximation to the current
calendar time. The value (time_t)(-1) is returned if the calendar time is not
available. If timer is not a null pointer, the return value is also assigned to the object it
points to.
Unspecified means the implementation can do whatever it wants; it isn't even required to document the choice. Nowhere in the standard is "1970" or "epoch" mentioned. Further, because the representation of time_t is unspecified, the result of time() need not be in seconds. An implementor can choose to make time_t the number of Twinkies sold in the fiscal year as long as the other date/time functions behave as specified.
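If you want to stay inside what the standard actually guarantees, a sketch like this avoids assuming any particular encoding: difftime() is specified to return the difference between two calendar times in seconds as a double, whatever time_t itself looks like.

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);

    /* ... long-running work would go here ... */

    time_t end = time(NULL);

    /* (time_t)(-1) is the standard's "calendar time not available" signal. */
    if (start == (time_t)(-1) || end == (time_t)(-1)) {
        puts("calendar time not available on this implementation");
        return 1;
    }

    /* difftime() hides the representation; no 1970, no seconds-as-integer assumption. */
    printf("elapsed: %.0f seconds\n", difftime(end, start));
    return 0;
}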
The implementation of time() is the same on both *nix and MS-Windows, which I think we can safely assume will be the operating system of choice for 99% of the members here at DaniWeb.
The implementation of time() is the same on both *nix and MS-Windows
Prove it. The implementation of the standard library can vary depending on the compiler, of which there are many on both systems. Further, the default library for a compiler can be replaced by another third party library. So what you're saying is that you've seen every implementation of the standard library and confirmed that they all use the same internal representation of time_t?
I couldn't care less if you choose to make unwarranted assumptions in your own code, but I strongly disagree with misleading beginners. Please teach reality instead of some idealistic fantasy, and let them make their own choice rather than forcing it upon them.
>>Prove it.
It's up to you to disprove it. Show me an implementation on MS-Windows or *nix where it's not true and I'll gladly eat my words. Although you are technically correct, there is no point in arguing about something that doesn't exist.
I am not clear as to what a seed is
In the case of rand(), you can think of the seed as the starting point for generating pseudorandom numbers. Since pseudorandom numbers are deterministic, there has to be some initial value for the algorithm.
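A minimal sketch of the usual idiom: seed once with something that varies between runs (the current time is the customary, if imperfect, choice), then pull numbers from rand().

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Seed the generator once; a fixed seed would repeat the same sequence
       on every run, while seeding with the current time makes each run differ. */
    srand((unsigned int)time(NULL));

    for (int i = 0; i < 5; ++i)
        printf("%d\n", rand() % 100);  /* five pseudorandom numbers in [0, 99] */

    return 0;
}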
It's up to you to disprove it.
I already did. The standard requires nothing of the sort. In fact, the standard goes out of its way to make the representation unspecified. Seeing as how I could fairly easily disprove your claim by writing my own implementation that does something different, I wouldn't recommend putting the ball in my court.
>>I am not clear as to what a seed is
I think the easiest way to think about it, from a beginner's perspective, is that, since a computer is deterministic (it can't actually produce truly random numbers), the best it can do is use a formula that produces "very wacky" results. In other words, the numbers generated by the formula make a very jumpy sequence where each number doesn't really seem (to the naked eye) to have anything to do with the previous one. But this is still a deterministic sequence, because if you start at the same place in the sequence, you will always get the same sequence of following numbers. The seed is, simply put, where you start in that sequence. And to get "random" number sequences, you need to start with a "random" seed. Since the exact time at which a program is started is pretty unpredictable and arbitrary, it's a fairly good "random" number to start with.
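To make the "same starting point, same sequence" idea concrete, here is a toy linear congruential generator; the constants are only illustrative and not those of any real rand() implementation.

#include <stdio.h>

static unsigned long state;  /* the generator's current position in its sequence */

static void toy_srand(unsigned long seed) { state = seed; }

static unsigned long toy_rand(void)
{
    /* One fixed formula: each value is computed from the previous one. */
    state = state * 1103515245UL + 12345UL;
    return (state / 65536UL) % 32768UL;
}

int main(void)
{
    toy_srand(42);  /* same seed... */
    for (int i = 0; i < 3; ++i) printf("%lu ", toy_rand());
    putchar('\n');

    toy_srand(42);  /* ...same sequence, every time */
    for (int i = 0; i < 3; ++i) printf("%lu ", toy_rand());
    putchar('\n');

    return 0;
}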
@AD: >>It's up to you to disprove it.
No. You are claiming that all implementations of time() are the same, you have the burden of proof (a pretty heavy one too).
@Narue:>>I wouldn't recommend putting the ball in my court.
Well said!
>>No. You are claiming that all implementations of time() are the same
Where did I say that? I just said *nix and MS-Windows. Show me a compiler that has some other implementation on either of these operating systems. The C standards say time() returns time_t, and that time_t is a numeric data type represented by size_t. Section 7.17 states that size_t is an unsigned integer type. So according to the standards it is not possible for time() to return anything other than an unsigned integer type. Now, how that unsigned integer is constructed is not specified, but it's common practice for it to be the number of seconds since the epoch. Borland, Microsoft, IBM, and GNU compilers all encode it as the number of seconds since 1 Jan 1970.
7.23 Date and time <time.h>
3 The types declared are size_t (described in 7.17); clock_t and time_t
which are arithmetic types capable of representing times; and struct tm
7.23.2.4 The time function
Synopsis
1 #include <time.h>
time_t time(time_t *timer);
Description
2 The time function determines the current calendar time. The encoding of the value is
unspecified.
Returns
3 The time function returns the implementation’s best approximation to the current
calendar time. The value (time_t)(-1) is returned if the calendar time is not
available. If timer is not a null pointer, the return value is also assigned to the object it
points to.
The C standards say time() returns time_t, and that time_t is a numeric data type represented by size_t.
The standard says nothing about time_t being synonymous with size_t.
His assumption was a calculated risk that worked out.
Calculated risks are simply a part of the business. My problem is that AD took his assumption and made it seem like a hard fact when teaching a beginner. Now the calculated risk becomes an uncalculated risk because the one taking it doesn't know there's a risk. See the difference?
>>The standard says nothing about time_t being synonymous with size_t.
Funny -- I posted the quote from the C standards. 7.23 "The types declared are size_t"
@AD
The phrase "the types declared are" comes in many places. There is a semi-colon after size_t at 7.23 also. I understand that English might not be your primary language, so let me clarify. The phrase "the types declared are" starts an enumeration of types that should be declared within the header file to which the section of the standard pertains. The enumerated elements are separated by semi-colons. If you notice, size_t appears under the headers stdlib.h, stdio.h and time.h. It simply means that the size_t type should be declared in all these headers, such that if one includes at least one of these three headers, the type size_t will be defined. It has absolutely nothing to do with time_t which is just another, separate element of the enumeration under 7.23. If you thought that section 7.23 somehow implied that time_t must be declared as a size_t alias, you misunderstood the phrasing used (I admit it's not the clearest sentence, but the context makes it clear when you take a closer look at it).
Oh yes -- I see it now. I mis-read it. I was assuming time_t was defined as size_t, which it is not. That's what I get for trying to read the standards -- and arguing a point with Narue :)
The fact remains -- show me a compiler (*nix or MS-Windows) where time_t does not represent the number of seconds since 1 Jan 1970. I don't think one exists. Therefore this discussion is all just academic.
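For what it's worth, the assumption is easy to probe on whatever compiler you happen to have: if time_t zero decodes to 1970-01-01 00:00:00 UTC, that implementation is using the Unix epoch. This only tests the one implementation it runs on, of course, not the standard.

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t zero = (time_t)0;
    struct tm *utc = gmtime(&zero);  /* decode the value as a UTC calendar time */

    if (utc == NULL) {
        puts("gmtime() could not convert the value");
        return 1;
    }

    printf("time_t 0 decodes to %04d-%02d-%02d %02d:%02d:%02d UTC\n",
           utc->tm_year + 1900, utc->tm_mon + 1, utc->tm_mday,
           utc->tm_hour, utc->tm_min, utc->tm_sec);
    return 0;
}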
It returns the number of seconds since 1970, as you previously posted. It doesn't matter whether the parameter is NULL or not, it will always return the same value.
Although the C standard may be fuzzy on the topic, the POSIX standard appears to be more specific. Although exceptions are possible, common sense dictates that if an implementation didn't follow convention, then interoperability would be compromised; you're both right, just looking at it from different perspectives.
http://en.wikipedia.org/wiki/Unix_time
http://vip.cs.utsa.edu/classes/cs3733/recitations/recB/usp302.pdf
GNU C allows for a -1 return value if the implementation does not provide a way to obtain a system time.
POSIX allows for time_t to be a floating-point value.
QNX gives the same specification as GNU C, because it is intended for embedded systems, which often have very different methods of accessing the system time (if they have one at all).
Similarly, many embedded systems' standard libraries / compilers / OS do not provide time functions at all. For example, Windows CE's CRT.
You may say that using embedded systems to provide examples of implementations that won't behave "normally" is a bit of a cheat, since very few embedded systems are even remotely standard-compliant. Well, you asked for examples, and you got some.
It probably wouldn't be all that surprising either if older Windows or DOS implementations (still post-ANSI-C) used the MS-DOS epoch of 1 Jan 1980.
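Here is a rough runtime probe of two of the properties mentioned above, the (time_t)(-1) failure return and the possibility of a floating-point time_t; neither test is authoritative, it just shows that the things being argued about can actually vary.

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Integer division truncates (time_t)1 / 2 to zero;
       a floating-point time_t would keep 0.5. */
    if ((time_t)1 / 2 > 0)
        puts("time_t appears to be a floating-point type here");
    else
        puts("time_t appears to be an integer type here");

    if (time(NULL) == (time_t)(-1))
        puts("this implementation cannot provide a calendar time");
    else
        puts("time() returned a usable calendar time");

    return 0;
}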
You may say that using embedded systems to provide examples of implementations that won't behave "normally" is a bit of a cheat, since very few embedded systems are even remotely standard-compliant. Well, you asked for examples, and you got some.
I asked for examples of *nix and MS-Windows. I said nothing about embedded because I already know they are different, and many of them don't support time functions. Many more embedded systems don't even support C programs at all.