That's basic math. 2.3 Kbps = 2.3/8 KB per second = 0.2875 KB per second. Now divide 200 MB (200,000 KB) by 0.2875 KB/s and you have the answer: roughly 695,000 seconds, which is about 193 hours (there are 3600 seconds in an hour), or a little over 8 days.
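A quick sanity check of that arithmetic in Python, assuming 1 KB = 1000 bytes, 1 MB = 1000 KB, and zero protocol overhead (all simplifications for illustration only):

```python
# Rough transfer-time estimate: file size divided by throughput.
file_size_kB = 200_000                  # 200 MB expressed in KB
throughput_kbps = 2.3                   # kilobits per second
throughput_kBps = throughput_kbps / 8   # kilobytes per second

seconds = file_size_kB / throughput_kBps
print(f"{seconds:,.0f} s = {seconds / 3600:.0f} h = {seconds / 86400:.1f} days")
# ~695,652 s -> about 193 hours, i.e. roughly 8 days
```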
Kbps, by the way, isn't a time value, so it's not a "throughput time" either. It's just a throughput.
If you want to know the difference between TCP and UDP, why not measure it yourself?
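A minimal sketch of such a measurement in Python, assuming a loopback test on one machine (the ports, payload size, and chunk count below are arbitrary choices); it only compares raw send times on localhost, so it won't show real-network effects like loss or congestion control, but it illustrates the setup:

```python
import socket
import threading
import time

PAYLOAD = b"x" * 1024          # 1 KB chunk (arbitrary)
CHUNKS = 10_000                # ~10 MB total (arbitrary)
HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 5001, 5002  # hypothetical test ports

def tcp_sink():
    # Accept one TCP connection and drain it until the sender closes.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, TCP_PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass

def udp_sink():
    # Read datagrams until nothing arrives for 2 seconds.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, UDP_PORT))
        srv.settimeout(2)
        try:
            while True:
                srv.recvfrom(65536)
        except socket.timeout:
            pass

threading.Thread(target=tcp_sink, daemon=True).start()
threading.Thread(target=udp_sink, daemon=True).start()
time.sleep(0.2)  # give the sinks time to bind

# TCP: reliable, ordered stream; this times how long it takes to push
# all bytes into the connection.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, TCP_PORT))
    start = time.perf_counter()
    for _ in range(CHUNKS):
        s.sendall(PAYLOAD)
    tcp_time = time.perf_counter() - start

# UDP: fire-and-forget datagrams; nothing guarantees they all arrive.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    start = time.perf_counter()
    for _ in range(CHUNKS):
        s.sendto(PAYLOAD, (HOST, UDP_PORT))
    udp_time = time.perf_counter() - start

print(f"TCP: {tcp_time:.3f} s, UDP: {udp_time:.3f} s")
```

On loopback, UDP will usually look faster simply because it skips acknowledgements and flow control, but some of its datagrams may be silently dropped if the receiver can't keep up, which is exactly the trade-off between the two protocols.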
Given that we have, for example, a 200 MB file and our network provides a throughput time of 2.3 Kbps, how can I calculate the time needed to send the whole file? Assume a repetitive system (a server), if that matters/makes sense. Is there a difference between TCP and UDP, considering only the time needed to send the file, not to establish the connection?
Thank you.
