What exactly does it mean when an algorithm has a running time of Omega(n lg n) or Theta(n^2), etc.? From what I can tell, Big O is used to show the worst case(?), Omega the best case(?), and Theta something in between the two(?). I'm having a hard time understanding this topic. The reason I ask is that the book asks questions like:
"Show that the running time of QUICKSORT is Theta(n^2) when the array A contains distinct elements and is sorted in decreasing order."
or
"What is the running time of QUICKSORT when all eleents of array A have the same value?"
How do you solve problems like these?
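
For context, here is my rough Python translation of the textbook-style QUICKSORT with a last-element pivot (what I believe the book calls PARTITION); the comparison counter is just my own harness for watching the growth on a decreasing input, not something from the book:

```python
import sys

comparisons = 0  # counts the A[j] <= pivot tests across all partitions

def partition(A, p, r):
    """Partition A[p..r] around the pivot A[r]; return the pivot's final index."""
    global comparisons
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        comparisons += 1
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)

# On a decreasing array every partition is maximally unbalanced, so the
# recursion shrinks by only one element per level; the counts below grow
# roughly 4x each time n doubles, which is what Theta(n^2) would predict.
sys.setrecursionlimit(10000)  # depth is about n on this input, not lg n
for n in (100, 200, 400):
    comparisons = 0
    A = list(range(n, 0, -1))  # n, n-1, ..., 1: distinct, decreasing
    quicksort(A, 0, n - 1)
    print(n, comparisons)
```

Running an experiment like this at least shows me the quadratic growth empirically, but I don't see how to turn that into the kind of proof the exercise is asking for.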