Starting with the same old sequential for loop. It will iterate through a slice of integers, sum it, and print the result.
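The original snippet isn't embedded here, so below is a minimal sketch of such a sequential loop; the slice size of one million and the use of `time.Since` for timing are assumptions, not the post's exact code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Slice of integers to sum; the size is an assumption,
	// the post does not show the exact data.
	data := make([]int, 1000000)
	for i := range data {
		data[i] = i
	}

	start := time.Now()
	sum := 0
	for _, v := range data {
		sum += v
	}
	fmt.Println("sum:", sum)
	fmt.Println("time taken:", time.Since(start))
}
```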
Note: I’m running this on OS X 10.9, Core i5, 4 GB RAM. Running it on the Playground might give a different elapsed time, probably 0.
Time taken on my Mac: 12.844223ms
Now let’s make it parallel (more precisely, concurrent) using a function literal:
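A sketch of the goroutine-per-element version, assuming a `sync.WaitGroup` for synchronisation and an atomic counter for the shared sum (the post’s original code may have synchronised differently):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	data := make([]int, 1000000)
	for i := range data {
		data[i] = i
	}

	start := time.Now()
	var sum int64
	var wg sync.WaitGroup
	for _, v := range data {
		wg.Add(1)
		// One goroutine per element, launched via a function literal.
		go func(n int) {
			defer wg.Done()
			atomic.AddInt64(&sum, int64(n)) // shared sum, so use an atomic add
		}(v)
	}
	wg.Wait()
	fmt.Println("sum:", sum)
	fmt.Println("time taken:", time.Since(start))
}
```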
Time taken: 13.199926ms
A slight increase in time, though it will depend on the hardware. What I want to show is that simply using goroutines won’t yield maximum throughput.
Let us move on to the semi-parallel for loop. The idea is to divide the data into chunks and use one goroutine per chunk:
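A sketch of the chunked version, assuming one chunk per CPU core via `runtime.NumCPU()` and a per-chunk partial-sum slot so the goroutines never touch a shared counter (again, these choices are assumptions, not the post’s exact code):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	data := make([]int, 1000000)
	for i := range data {
		data[i] = i
	}

	start := time.Now()
	workers := runtime.NumCPU()                  // one chunk per CPU core
	chunk := (len(data) + workers - 1) / workers // ceiling division
	partial := make([]int, workers)              // each goroutine writes only its own slot

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		if lo >= len(data) {
			break // nothing left for the remaining workers
		}
		hi := lo + chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w, lo, hi)
	}
	wg.Wait()

	sum := 0
	for _, p := range partial {
		sum += p
	}
	fmt.Println("sum:", sum)
	fmt.Println("time taken:", time.Since(start))
}
```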
Time taken: 15.217µs, far less than the above two.
A semi-parallel for loop, with the chunk size decided dynamically, will get the maximum throughput. If the data processing requires merging after the per-chunk work (the reduce part of map-reduce), use semaphore channels and have another parallel merge function.
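As a rough sketch of that pattern, the chunk workers below send their partial sums over a buffered channel and a separate merge function reduces them; the channel capacity, the WaitGroup used to close it, and the `merge` function name are all assumptions for illustration, not the post’s original code:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// merge drains the per-chunk partial sums and reduces them to a total.
// This stands in for the "reduce"/merge step mentioned above; the
// original merge code is not shown, so this shape is assumed.
func merge(results <-chan int) int {
	total := 0
	for r := range results {
		total += r
	}
	return total
}

func main() {
	data := make([]int, 1000000)
	for i := range data {
		data[i] = i
	}

	workers := runtime.NumCPU()
	chunk := (len(data) + workers - 1) / workers
	results := make(chan int, workers) // buffered channel carries each chunk's partial sum

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		if lo >= len(data) {
			break
		}
		hi := lo + chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			s := 0
			for _, v := range data[lo:hi] {
				s += v
			}
			results <- s // hand the partial sum to the merge stage
		}(lo, hi)
	}

	// Close the results channel once every worker has sent its partial,
	// so the merge loop can terminate.
	go func() {
		wg.Wait()
		close(results)
	}()

	fmt.Println("sum:", merge(results))
}
```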
When I run it from Sublime Text, the log.Println() output always appears after the fmt.Println() output; on the Playground, you need to scroll down in the results to see the time printed.
Originally published at kkfaisal.github.io.