The search algorithm itself is pretty trivial, which is why you always come back to the "binary search tree". Once you have a binary search tree constructed, searching for a particular element or range is a simple matter of recursively descending into the branch that should contain the sought-after value: go left if it's less than the current node's key, go right otherwise.
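As a minimal sketch of that recursive descent (the `Node` class here is a hypothetical three-field node, not from any particular library):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def search(node: Optional[Node], key: int) -> bool:
    if node is None:
        return False                    # fell off the tree: key is absent
    if key == node.key:
        return True
    if key < node.key:
        return search(node.left, key)   # sought key is smaller: go left
    return search(node.right, key)      # sought key is larger: go right

# A small example tree, valid by construction:
#         8
#       /   \
#      3     10
#     / \      \
#    1   6     14
root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
```

Each call discards one entire subtree, which is where the logarithmic cost comes from when the tree is balanced.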
The real trick is constructing that tree and, especially, maintaining it (i.e., dynamically inserting and removing elements). This is where the different binary search tree variants differ, most notably AVL trees and red-black trees. They add bits of bookkeeping information and/or special algorithms to keep the tree reasonably balanced (all leaf nodes at roughly the same depth) after an insertion or deletion. If the tree cannot be kept balanced, search performance degrades from O(log n) toward O(n), and a recursive search algorithm also becomes dangerous (stack overflow on a deep, degenerate tree).
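To see why this matters, here is a sketch of naive (unbalanced) insertion fed sorted input, the worst case for a plain BST. With no rebalancing, the tree degenerates into a linked list whose height equals the number of keys:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(node: Optional[Node], key: int) -> Node:
    # Naive insertion: no rebalancing whatsoever.
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    return node

def height(node: Optional[Node]) -> int:
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

root = None
for k in range(1, 101):      # sorted input: every key goes right
    root = insert(root, k)
# height(root) is now 100 — effectively a linked list, so search is O(n),
# whereas an AVL or red-black tree would keep the height near log2(100) ≈ 7.
```

Balanced variants avoid this by rotating subtrees during insertion so the height stays logarithmic.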
And then there are other kinds of binary search trees that are more complex because the keys aren't a single one-dimensional quantity (such as a set of numbers, or words in alphabetical order). All the same principles apply; everything is just more complicated, including the search algorithm (though it remains pretty straightforward compared to the create / insert / remove algorithms).
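One common example (my choice for illustration; the text above doesn't name a specific structure) is a k-d tree over 2-D points. The search still follows one branch per level, but which coordinate you compare alternates with depth:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]

@dataclass
class KDNode:
    point: Point
    left: Optional["KDNode"] = None
    right: Optional["KDNode"] = None

def insert(node: Optional[KDNode], point: Point, depth: int = 0) -> KDNode:
    if node is None:
        return KDNode(point)
    axis = depth % 2                        # alternate x/y as the splitting axis
    if point[axis] < node.point[axis]:
        node.left = insert(node.left, point, depth + 1)
    else:
        node.right = insert(node.right, point, depth + 1)
    return node

def contains(node: Optional[KDNode], point: Point, depth: int = 0) -> bool:
    if node is None:
        return False
    if node.point == point:
        return True
    axis = depth % 2                        # same axis rule as insertion
    if point[axis] < node.point[axis]:
        return contains(node.left, point, depth + 1)
    return contains(node.right, point, depth + 1)

root = None
for p in [(5.0, 4.0), (2.0, 6.0), (7.0, 1.0), (3.0, 8.0)]:
    root = insert(root, p)
```

Exact-match search stays simple, as the text says; it's range and nearest-neighbour queries (and rebalancing) that get genuinely hairy in higher dimensions.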
And then there are different storage methods (e.g., linked structures, compact array layouts, cache-oblivious layouts, or hybrids). But that is a separate issue (memory locality).
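For a flavour of the compact option, here is a sketch of one such layout (heap-style implicit indexing, my example; the text doesn't prescribe one): a complete BST stored in a flat array with no pointers at all, where the children of the node at index `i` sit at `2*i + 1` and `2*i + 2`.

```python
# Level-order layout of the BST:      8
#                                   /   \
#                                  3     10
#                                 / \    / \
#                                1   6  9   14
tree = [8, 3, 10, 1, 6, 9, 14]

def search(key: int) -> bool:
    i = 0
    while i < len(tree):
        if tree[i] == key:
            return True
        # Implicit children: no pointers, just index arithmetic.
        i = 2 * i + 1 if key < tree[i] else 2 * i + 2
    return False
```

The search logic is identical to the pointer-based version; only the storage changed, and the contiguous array is what buys the cache locality mentioned above.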