The Quadro is built to be feature-rich rather than optimized for speed and efficiency (battery life comes to mind). That's the trade-off between a GPU and its "Quadro" version. In the case of the "9600" and the "770" you're talking about the "G96M" chipset. The difference here is somewhat analogous to the difference between a Pentium 4 and a Pentium 4-based Xeon: larger caches and more features on the chip, but built around the same architecture.
The main difference is the optimization for speed versus full rendering capability, which is enabled in the firmware and drivers for the card. With a Quadro you are guaranteed a full rendering of the scene, whatever the performance cost. The firmware and drivers do not cut corners, but as a result the chip consumes more power and slows down when rendering complex scenes. For most professional work this is fine, because quality is expected rather than real-time rendering.
As such, if you are a graphic designer and absolutely need the features the Quadro offers (essentially the guarantee of full-quality scene rendering), then you will not find that on Apple's laptops (yet?). However, many of the recent advancements in GPU and driver technology have made the differences less apparent for many uses. For instance, it used to be that the FireGL and Quadro GPUs were the only ones that were fully OpenGL compliant. These days, most GPUs (consumer or "pro") are fully OpenGL compliant because games and home applications demand it.
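If you want to see what a given machine's GPU and driver actually report, a quick way is to create a GL context and print the driver strings. Here's a minimal sketch, assuming GLFW is installed (any context-creation library would do the same job):

```cpp
// Probe what the OpenGL driver reports. Assumes GLFW is available;
// link with -lglfw -lGL on Linux, or the GLFW and OpenGL frameworks
// on macOS. No window is shown; we only need a live GL context.
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;

    // Hidden window: we want a context, not something on screen.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    // These strings come straight from the driver, so they tell you
    // exactly which GPU and GL version your app will actually get.
    std::printf("Vendor:   %s\n",
                reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("Renderer: %s\n",
                reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::printf("Version:  %s\n",
                reinterpret_cast<const char*>(glGetString(GL_VERSION)));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```

On a consumer card and a Quadro built on the same chip (like the G96M pair above), the renderer string is often the clearest visible difference between the two.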
The real question is what, specifically, you are doing. If you're just throwing together concept models and simple to moderately complex scenes, then any GPU will do. But if you are rendering a production movie or a feature-rich scene, the Quadro ensures no speed-optimized corners are cut: you won't run into clipped textures, non-smoothed surfaces, or similar shortcuts that usually go unnoticed in a real-time rendered scene.
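For one concrete example of where the spec actually allows a driver to cut those corners, look at OpenGL's hint mechanism. This is a sketch of the classic compatibility-profile calls, assuming a current GL context (like the one from the probe above); note that GL_NICEST is purely advisory, and a gaming-oriented driver is free to ignore it in favor of speed:

```cpp
// Request maximum-quality smoothing. Per the GL spec these are hints,
// not guarantees: a driver tuned for frame rate may silently take the
// fast path, which is exactly the kind of corner-cutting a pro card's
// driver is expected not to do.
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);     // ask for smooth lines
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);  // ask for smooth polygon edges
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
```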