Performance of Swift vs. Objective-C and the Debugger

Recently I migrated an old project of mine from Objective-C to Swift. It was not that difficult and it worked out fine, and I was eager to see how the performance of the migrated system would turn out. How big was my disappointment when I saw that the Swift version of my system was at least a factor of 10 slower than the Objective-C version, with unchanged functionality. That can't be true, I thought, and debugged a little bit.

The system in question is a Call Center Simulator which makes heavy use of random numbers drawn from specific distributions. It turned out that the biggest difference in performance seemed to occur when generating these distributed random numbers. Here is the code (Swift and Objective-C) of this little piece of software (part of a struct):
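The original listing is not reproduced here, so as a stand-in, here is a minimal Swift sketch of how such a binned distribution sampler might look. All names (`DistributedRandom`, `weights`, `next`) are illustrative assumptions, not the author's actual code; it uses inverse-transform sampling over a precomputed cumulative table, which matches the "distribution with bins" setup described below.

```swift
import Foundation

/// Hypothetical sketch of a binned distribution sampler (not the author's
/// original code): inverse-transform sampling over a cumulative table.
struct DistributedRandom {
    private let cumulative: [Double]   // running sum of bin weights, normalized to 1

    /// Build the table from unnormalized bin weights, e.g. a
    /// Maxwell-Boltzmann density evaluated at 400 points.
    init(weights: [Double]) {
        let total = weights.reduce(0, +)
        var running = 0.0
        cumulative = weights.map { w in
            running += w / total
            return running
        }
    }

    /// Draw one bin index: map a uniform random number in [0, 1) to a bin
    /// via binary search over the cumulative table.
    func next(using rnd: () -> Double = { Double.random(in: 0..<1) }) -> Int {
        let u = rnd()
        var lo = 0
        var hi = cumulative.count - 1
        while lo < hi {
            let mid = (lo + hi) / 2
            if cumulative[mid] < u { lo = mid + 1 } else { hi = mid }
        }
        return lo
    }
}

// Usage: weights shaped like a Maxwell-Boltzmann density, 400 bins.
let weights = (0..<400).map { i -> Double in
    let v = Double(i) / 50.0
    return v * v * exp(-v * v)
}
let sampler = DistributedRandom(weights: weights)
let bin = sampler.next()   // index in 0..<400
```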

Well, that doesn't look too dangerous, does it? I isolated that code in a little test. For that I used the same algorithm for the random number generation (Utilities.shared.rndm()) and generated a Maxwell-Boltzmann distribution with 400 bins. I generated 10,000,000 random numbers from that distribution and measured the time needed on my 2019 iMac. Here are the results:
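The measurement setup is not shown either; a sketch of such a micro-benchmark might look like this (an assumed harness, not the author's test project, timing 10,000,000 plain uniform draws with `DispatchTime`):

```swift
import Foundation

// Hypothetical timing harness: wall-clock time for a block of work.
func measure(_ label: String, _ body: () -> Void) {
    let start = DispatchTime.now()
    body()
    let seconds = Double(DispatchTime.now().uptimeNanoseconds
                         - start.uptimeNanoseconds) / 1e9
    print("\(label): \(seconds) s")
}

// Accumulate into a checksum so the optimizer cannot discard the loop.
var checksum = 0.0
measure("10M uniform draws") {
    for _ in 0..<10_000_000 {
        checksum += Double.random(in: 0..<1)
    }
}
```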

Objective-C: 9.67 s
Swift: 590.81 s

OK, that explains my disappointment; in Swift the performance of this little piece of code seems to be a factor of 61 lower than in Objective-C. Too bad. Well, let's double-check that result with the release version of my test project (debugging off).

And, surprise, I got the following results:

Objective-C: 6.50 s
Swift: 1.87 s

The trend turned into the opposite. Now the Swift version is a factor of 3.5 faster than the Objective-C version. The debugger seems to have a much bigger influence on performance in Swift than in the Objective-C world. Going back to my original project, I observed a similar behaviour when turning off the debugger. That's fine.

So don’t forget to turn off the debugger when measuring Swift performance.
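For command-line tests, the equivalent of switching schemes is the optimization flag; a sketch, assuming a single-file benchmark named `bench.swift` (Xcode's Release configuration passes `-O` for you):

```shell
# Unoptimized, debug-style build (what you get by default in a Debug scheme)
swiftc -Onone bench.swift -o bench_debug

# Optimized release build
swiftc -O bench.swift -o bench_release

./bench_release
```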
