Results

The role of the network mixing parameter on accuracy and computing time. First, we study the accuracy of the community detection algorithms as a function of the mixing parameter μ. To measure the accuracy we employ the normalised mutual information (NMI), a measure borrowed from information theory that has been used regularly in papers comparing community detection algorithms13. Define a confusion matrix N whose rows correspond to the `real' communities and whose columns correspond to the `found' communities; the element N_{ij} is the number of nodes of real community i that appear in the j-th detected community. The normalised mutual information is then

$$I(A,B)=\frac{-2\sum_{i=1}^{C_A}\sum_{j=1}^{C_B} N_{ij}\,\log\!\left(\frac{N_{ij}\,N}{N_{i\cdot}\,N_{\cdot j}}\right)}{\sum_{i=1}^{C_A} N_{i\cdot}\,\log\!\left(\frac{N_{i\cdot}}{N}\right)+\sum_{j=1}^{C_B} N_{\cdot j}\,\log\!\left(\frac{N_{\cdot j}}{N}\right)} \qquad (2)$$

where C_A denotes the number of communities given by the LFR model and C_B the number of communities detected by the algorithm; N_{i·} is the sum over the i-th row of N and N_{·j} the sum over its j-th column. If the estimated communities are identical to the real ones, I(A,B) equals 1. If the partition found by the algorithm is totally independent of the real partition, I(A,B) vanishes. As pointed out in ref. 21, the mutual information can be normalised in different ways. These normalisation methods are sensitive to different partition properties and have different theoretical properties21–23. To get a better overview of the accuracy, we have calculated the NMI using all five of these definitions (cf. SI). We conclude that, in the current study, the different normalisation procedures yield qualitatively similar behaviour. For the sake of brevity, and consistently with Danon et al.8, we report in this section only I_sum (i.e. normalisation by the arithmetic mean); the results for the other NMIs are shown in the "Supplementary Information". The results are shown in Fig. 1.
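As a concrete illustration, Eq. (2) can be evaluated directly from the confusion matrix. The following is a minimal pure-Python sketch (the function name and layout are ours, not from the paper); it implements the arithmetic-mean normalisation I_sum reported in this section:

```python
import math

def nmi_sum(N):
    """I_sum of Eq. (2): confusion matrix N has rows = real communities,
    columns = found communities; N[i][j] counts shared nodes."""
    n = sum(sum(row) for row in N)            # total number of nodes
    row_sums = [sum(row) for row in N]        # N_{i.}: size of each real community
    col_sums = [sum(col) for col in zip(*N)]  # N_{.j}: size of each found community

    # Numerator: -2 * sum_ij N_ij * log(N_ij * n / (N_{i.} * N_{.j}))
    num = -2.0 * sum(
        Nij * math.log(Nij * n / (row_sums[i] * col_sums[j]))
        for i, row in enumerate(N)
        for j, Nij in enumerate(row)
        if Nij > 0
    )
    # Denominator: sum_i N_{i.} log(N_{i.}/n) + sum_j N_{.j} log(N_{.j}/n)
    den = (sum(Ni * math.log(Ni / n) for Ni in row_sums if Ni > 0)
           + sum(Nj * math.log(Nj / n) for Nj in col_sums if Nj > 0))
    return num / den
```

For identical partitions the diagonal confusion matrix gives `nmi_sum` equal to 1, and for statistically independent partitions (e.g. every real community split evenly over all found ones) it gives 0, matching the limiting cases stated above.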
Each panel presents the accuracy of a given community detection algorithm and is subdivided into two plots: the lower axes depict the average value of the NMI, and the upper ones the standard deviation of the measure when repeated over 100 different network realisations. Most of the algorithms uncover the communities well when the mixing parameter is small, as is apparent from the large values of I in the limit μ → 0. The accuracy of the algorithms then decreases with increasing values of both network size and μ. Different algorithms behave differently: the accuracy of the Fastgreedy algorithm decreases monotonically, in a smooth fashion, with a very small standard deviation over the whole range (Panel (a), Fig. 1), whereas that of the Leading eigenvector algorithm falls rapidly even for small values of μ (Panel (c), Fig. 1). All the other algorithms display abrupt changes of behaviour: their performance remains relatively stable up to a turning point, after which the NMI drops very quickly as a function of μ. The change of behaviour usually occurs around μ = 1/2, which corresponds to the strong definition of community16. Interestingly, the Label propagation and Edge betweenness algorithms have turning points smaller than this value, while the Infomap, Multilevel, Walktrap, and Spinglass algorithms have turning points greater than μ = 1/2. We have also noticed that for the Infomap algorithm the normalised mutual information behaves discontinuously at around μ ≈ 0.55. On the other hand, for Label propagation, I vanishes.