This is not homework; it's for part of a project I've thought of starting (I won't give details).
There are four groups (sort of like jobs or colleges): A, B, C, and D. In the project there will be many more, but for now let's assume four. People can apply to any number of those groups, and the groups can reject or admit them. Person 1 got into A, B, C, and D. Person 2 got into A, B, C, but not D. Person 3 got into A, B, but not C or D. Person 4 got into A, but not B, C, or D. Obviously, A is the least selective, followed by B, C, and lastly D. How does a computer figure this out when there are any number of groups, and any number of people who applied to any number of groups?
Thank you in advance.
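One way a computer could do it, sketched in Python (the data layout and function names below are my own invention for illustration, not from any library): encode each person's results, then compare groups pairwise — group X is at least as selective as group Y if everyone X admitted (among people who applied to both) was also admitted by Y.

```python
# Hypothetical encoding of the example above: person -> {group: admitted?}.
# Only the groups a person actually applied to would appear in their dict.
applications = {
    1: {"A": True, "B": True, "C": True, "D": True},
    2: {"A": True, "B": True, "C": True, "D": False},
    3: {"A": True, "B": True, "C": False, "D": False},
    4: {"A": True, "B": False, "C": False, "D": False},
}

groups = sorted({g for res in applications.values() for g in res})
admitted = {g: {p for p, res in applications.items() if res.get(g)}
            for g in groups}

def at_least_as_selective(x, y):
    """x is at least as selective as y if, among people who applied to
    both, everyone admitted by x was also admitted by y."""
    both = {p for p, res in applications.items() if x in res and y in res}
    return (admitted[x] & both) <= (admitted[y] & both)

# Rank from least to most selective by number of admits.  (This simple
# sort assumes the pairwise relation comes out consistent, as it does
# in the four-person example.)
order = sorted(groups, key=lambda g: len(admitted[g]), reverse=True)
print(order)  # ['A', 'B', 'C', 'D']
```

The pairwise check still makes sense when people apply to only some groups, since it only compares people who applied to both; the count-based sort at the end is the part that gets shaky with partial applications.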

Could it check the number of times people have gotten into each specific group?

@zeroliken
It could, but people don't have to apply to all groups.

How about this: check the cases where a person got into a specific group but not the others, tally the results, and compare how often people got into that group with and without getting into the other groups. Then compare that result across groups.
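If I'm reading that suggestion right, it can be kept to pairwise tallies instead of enumerating every combination of groups (the data and names below are made up for the sketch): for each pair of groups a person applied to, record which one admitted them when the other did not.

```python
from itertools import combinations

# Made-up data for the sketch: person -> {group: admitted?}; only
# groups the person applied to appear in their dict.
results = {
    1: {"A": True, "B": True, "C": True, "D": True},
    2: {"A": True, "B": True, "C": True, "D": False},
    3: {"A": True, "B": True, "C": False, "D": False},
    4: {"A": True, "B": False, "C": False, "D": False},
}

groups = sorted({g for res in results.values() for g in res})

# wins[(x, y)] counts people admitted to x but rejected by y --
# evidence that x is less selective than y.
wins = {(x, y): 0 for x in groups for y in groups if x != y}
for res in results.values():
    for x, y in combinations(res, 2):
        if res[x] and not res[y]:
            wins[(x, y)] += 1
        elif res[y] and not res[x]:
            wins[(y, x)] += 1

print(wins[("A", "D")], wins[("D", "A")])  # 3 0
```

Only one counter is kept per ordered pair of groups, so the space grows quadratically with the number of groups rather than with the number of possible combinations.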

That would take a lot of space! Is there no other way? I already considered making a table and doing that. If I am correct, with 300 groups, we would have about 10^158 different possibilities. Help!

Try turning it around. Keep track of the number of times a group has been applied to, compared to its count of admitted members. Then take a ratio. Lowest ratio = most exclusive.
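That idea in a few lines of Python (the counts here are invented to match the four-group example; in practice they would come from the application records):

```python
# Per-group totals: how many people applied, how many were admitted.
applied  = {"A": 4, "B": 4, "C": 4, "D": 4}
admitted = {"A": 4, "B": 3, "C": 2, "D": 1}

# Admission ratio per group; lowest ratio = most exclusive.
ratio = {g: admitted[g] / applied[g] for g in applied}
ranking = sorted(ratio, key=ratio.get)
print(ranking)  # ['D', 'C', 'B', 'A']  (most exclusive first)
```

This needs only two counters per group, so it stays cheap no matter how many people apply.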

@BitBit
That's still a percentage.
@zeroliken
If you think of anything else, tell me.


We're a friendly, industry-focused community of developers, IT pros, digital marketers, and technology enthusiasts meeting, networking, learning, and sharing knowledge.