
A deep reinforcement learning framework for influence maximization problem on large-scale social networks

Recently, the Influence Maximization (IM) problem, which concerns how information spreads through social networks, has gained significant prominence in both industrial applications and academic research. Owing to the exponential expansion of network data and the increasing intricacy of application scenarios, traditional approximation algorithms face substantial challenges, including insufficient theoretical guarantees, suboptimal empirical performance, and prohibitive computational costs.
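For context, the sketch below (not taken from the paper) illustrates the classic greedy baseline for influence maximization under the Independent Cascade model with Monte Carlo spread estimation. The function names, propagation probability, budget, and toy graph are all illustrative assumptions; the point is that every marginal-gain evaluation requires many full cascade simulations over the whole graph, which is exactly where the prohibitive cost on large networks comes from.

```python
# Minimal greedy influence maximization under the Independent Cascade (IC) model.
# This is a generic textbook baseline, not the paper's method; all parameters
# (propagation probability p, budget, number of Monte Carlo runs) are assumptions.
import random
import networkx as nx

def simulate_ic(graph, seeds, p=0.05):
    """Run one Independent Cascade simulation; return the number of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.successors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(active)

def estimate_spread(graph, seeds, p=0.05, runs=100):
    """Monte Carlo estimate of the expected spread of a seed set."""
    return sum(simulate_ic(graph, seeds, p) for _ in range(runs)) / runs

def greedy_im(graph, budget=2, p=0.05, runs=100):
    """Greedy seed selection: repeatedly add the node with the largest marginal gain."""
    seeds = set()
    for _ in range(budget):
        base = estimate_spread(graph, seeds, p, runs) if seeds else 0.0
        best_node, best_gain = None, -1.0
        for v in graph.nodes():
            if v in seeds:
                continue
            gain = estimate_spread(graph, seeds | {v}, p, runs) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        seeds.add(best_node)
    return seeds

if __name__ == "__main__":
    g = nx.gnp_random_graph(200, 0.02, directed=True)  # small toy graph for illustration
    print("selected seeds:", greedy_im(g))
```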

Although several deep learning-based algorithms have emerged recently, they often exhibit limited generalizability when applied to heterogeneous large-scale networks. To tackle this issue, this paper introduces MaDGNN, a Deep Reinforcement Learning framework that solves the Influence Maximization problem on large-scale social networks by combining graph neural networks with reinforcement learning. As a result, the algorithm can be applied to different types of networks.
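As a rough illustration of the general GNN-plus-Q-learning pattern that such frameworks build on (a generic sketch, not MaDGNN's actual architecture; all layer sizes, weights, and feature choices are assumptions), the following NumPy snippet embeds nodes with mean-aggregation message passing and then greedily selects seed nodes with an untrained Q-head conditioned on the partial seed set:

```python
# Generic GNN + Q-learning seed-selection skeleton (illustrative only, not MaDGNN).
import numpy as np

def message_passing(adj, feats, num_layers=2):
    """Mean-neighbor aggregation followed by a random (untrained) linear map."""
    rng = np.random.default_rng(0)
    h = feats
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(num_layers):
        agg = adj @ h / deg                                  # average neighbor embeddings
        w = rng.normal(scale=0.1, size=(h.shape[1] * 2, h.shape[1]))
        h = np.tanh(np.concatenate([h, agg], axis=1) @ w)    # combine self and neighborhood
    return h

def q_values(embeddings, selected_mask, q_weights):
    """Score every node given the current partial seed set (the RL state)."""
    if selected_mask.any():
        state = embeddings[selected_mask].mean(axis=0)
    else:
        state = np.zeros(embeddings.shape[1])
    state = np.broadcast_to(state, embeddings.shape)
    return np.concatenate([embeddings, state], axis=1) @ q_weights

def select_seeds(adj, feats, budget=3):
    """Greedy seed selection with the (untrained) Q-function."""
    emb = message_passing(adj, feats)
    rng = np.random.default_rng(1)
    q_w = rng.normal(scale=0.1, size=(emb.shape[1] * 2,))
    chosen = np.zeros(adj.shape[0], dtype=bool)
    for _ in range(budget):
        q = q_values(emb, chosen, q_w)
        q[chosen] = -np.inf                                  # never pick the same node twice
        chosen[int(np.argmax(q))] = True
    return np.flatnonzero(chosen)

if __name__ == "__main__":
    n = 50
    rng = np.random.default_rng(2)
    adj = (rng.random((n, n)) < 0.1).astype(float)           # toy adjacency matrix
    feats = rng.normal(size=(n, 8))                          # toy node features
    print("selected seeds:", select_seeds(adj, feats))
```

In a trained system the message-passing and Q-head weights would be learned, for example with Q-learning against simulated spread rewards, rather than drawn at random as they are in this sketch.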

Extensive empirical experiments in real-world scenarios demonstrate that our model surpasses baseline methods by a significant margin across diverse datasets. Furthermore, the proposed algorithm, when trained on smaller graphs, exhibits excellent scalability to larger graphs.

This work was supported in part by the National Natural Science Foundation of China under Grants 62306224 and 62471371, in part by the Guangzhou Basic and Applied Basic Research Foundation under Grant SL2022A04J00891, and in part by the Guangdong High-level Innovation Research Institution Project under Grant 2021B0909050008.

Yang provided the motivation for the article and was responsible for the experimental code. Shu and Wang contributed the idea and design of the study. Wang and Du drafted the manuscript and prepared the figures. Teng and Liu reviewed and revised the paper. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

To facilitate the reproducibility of our results, we present a consolidated summary of the hyperparameter settings used in MaDGNN in Table 10.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material.

You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Yang, F., Wang, Y., Shu, N. et al. A deep reinforcement learning framework for influence maximization problem on large-scale social networks. Sci Rep (2026). https://doi.org/10.1038/s41598-026-41731-9

Summary

This report summarizes a Scientific Reports paper introducing MaDGNN, a deep reinforcement learning framework that combines graph neural networks with reinforcement learning to solve the influence maximization problem on large-scale social networks. The authors report that the model outperforms baseline methods across diverse datasets and that training on smaller graphs scales to larger ones.


Original Source: Nature.com | Author: Fang Yang, Yifan Wang, Nina Shu, Xingkui Du, Xiangyi Teng, Jing Liu | Published: March 1, 2026, 12:00 am
