Proceedings of International Conference on Applied Innovation in IT
2025/12/22, Volume 13, Issue 5, pp.187-191
A Survey of Large and Small Language Models
Bojana Velichkovska, Jasmina Angelevska Kostadinoska, Dushko Stavrov and Goran Jakimovski

Abstract: The design and deployment of language models (LMs) represent a significant advancement in the field of natural language processing (NLP). Their strong performance in understanding, interpreting, and generating human language has enabled applications across many fields, including highly sensitive areas such as healthcare assistance, drug discovery and medical research, and education. Given this potential impact, it is important to understand how LMs operate and to ensure their quality in these settings. Two approaches with distinct areas of applicability have emerged: large language models (LLMs) and small language models (SLMs). While LLMs have demonstrated high accuracy on generalized knowledge, SLMs have evolved to offer specialized insight in specific domains, and each offers different benefits to its users. This study defines and examines the applicability of LLMs and SLMs, identifies the foundational research behind both, and presents a comparative analysis of the key architectural and methodological innovations and characteristics that have shaped their development, with a focus on the newest and most widely used LMs.
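The scale gap between LLMs and SLMs discussed above can be made concrete with a back-of-envelope parameter estimate for a Transformer encoder. The sketch below is illustrative only and not taken from the paper: the hyperparameters are the published BERT-base and DistilBERT configurations, and the counting formula is a rough approximation that ignores biases and LayerNorm terms.

```python
def approx_transformer_params(vocab_size: int, d_model: int,
                              num_layers: int, max_positions: int = 512) -> int:
    """Back-of-envelope parameter count for a Transformer encoder.

    Each layer holds roughly 4*d^2 attention weights plus 8*d^2
    feed-forward weights (assuming the usual 4x hidden expansion);
    embeddings add (vocab + positions) * d. Biases and LayerNorm
    parameters are ignored, so totals land slightly under reported sizes.
    """
    per_layer = 12 * d_model ** 2
    embeddings = (vocab_size + max_positions) * d_model
    return num_layers * per_layer + embeddings

# BERT-base (12 layers) vs. DistilBERT (6 layers): same width, same vocab.
bert = approx_transformer_params(vocab_size=30522, d_model=768, num_layers=12)
distil = approx_transformer_params(vocab_size=30522, d_model=768, num_layers=6)
print(f"BERT-base  ~{bert / 1e6:.0f}M parameters")    # reported size: ~110M
print(f"DistilBERT ~{distil / 1e6:.0f}M parameters")  # reported size: ~66M
```

Halving the layer count roughly halves the per-layer weights while the embedding table stays fixed, which is why distilled SLMs such as DistilBERT retain most of the architecture at a fraction of the size.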
Keywords: Artificial Intelligence, Natural Language Processing, Large Language Models, Small Language Models.
DOI: 10.25673/122852