I have a modest select statement that returns about 7k small rows, maybe 200 KB of data in total.
When I run it (in SSMS), it costs about 1.5 seconds of CPU.
If I wrap the select in a count(*) so it just returns the count and not the rows, it costs only about 1.0 second of CPU.
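For reference, this is roughly the shape of the comparison I'm making (the table and column names below are just placeholders, not my real query), measuring with SET STATISTICS TIME:

    SET STATISTICS TIME ON;

    -- original query: returns ~7k rows to the client
    -- (dbo.Orders and its columns are made up for illustration)
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20240101';

    -- wrapped version: only a single count row comes back to the client
    SELECT COUNT(*)
    FROM (
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= '20240101'
    ) AS q;

    SET STATISTICS TIME OFF;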
So, is this because it costs a little CPU just to send rows across the network to the client? Or does it cost less because SQL Server optimizes the query, knowing I don't want the rows back? I'm curious about this because other queries and SPs send a LOT of data back, and if that by itself costs CPU, I'd like to be sure of it.
Thanks.
Josh