THE KARANGTURI FRAMEWORK FOR PROMPT ENGINEERING: A COMPARATIVE ANALYSIS WITH RTF, COT, AND REACT ON GENERATIVE AI MODELS
DOI: https://doi.org/10.53416/stmj.v5i2.353

Abstract
This study aims to perform a comparative analysis of four prompt engineering frameworks: KARANGTURI, RTF (Role-Task-Format), CoT (Chain-of-Thought), and ReAct. These frameworks play a crucial role in assisting users in designing effective instructions for Large Language Models (LLMs). A descriptive-comparative approach is employed to examine each framework in terms of structure, focus, complexity, strengths, limitations, and practical application. KARANGTURI, a locally developed framework, consists of four key elements: Character, Summary, Goal, and Constraint. RTF offers a simple structure based on three core components, making it suitable for straightforward tasks. CoT emphasizes step-by-step reasoning and is effective for complex and logical challenges. ReAct integrates reasoning with actions and supports interaction with external tools for advanced tasks. The analysis reveals that the choice of framework depends on task type, complexity level, and the need for reasoning or access to external information. KARANGTURI is viewed as a comprehensive and flexible approach with promising potential, though it requires further empirical validation. The findings are expected to help AI practitioners select the most appropriate prompting strategy based on their specific needs.
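The abstract names the concrete elements of the two template-style frameworks: KARANGTURI (Character, Summary, Goal, Constraint) and RTF (Role, Task, Format). As a minimal sketch of how such structured prompts could be assembled in practice, the helper functions below compose each framework's elements into a single prompt string; the exact template wording and the example values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: the element names come from the abstract, but the
# template wording and example values are assumptions for demonstration.

def karangturi_prompt(character: str, summary: str, goal: str, constraint: str) -> str:
    """Compose a prompt from the four KARANGTURI elements."""
    return (
        f"Character: {character}\n"
        f"Summary: {summary}\n"
        f"Goal: {goal}\n"
        f"Constraint: {constraint}"
    )

def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Compose a prompt from the three RTF components (Role-Task-Format)."""
    return f"Role: {role}\nTask: {task}\nFormat: {fmt}"

if __name__ == "__main__":
    # Hypothetical tutoring scenario, used only to show the structure.
    prompt = karangturi_prompt(
        character="You are a patient science tutor.",
        summary="The student is revising photosynthesis for an exam.",
        goal="Explain the light-dependent reactions in simple terms.",
        constraint="Use at most 150 words and avoid technical jargon.",
    )
    print(prompt)
```

The extra Constraint and Summary fields are what make KARANGTURI more verbose but also more explicit than RTF, which matches the abstract's characterization of RTF as suited to straightforward tasks.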
References
Widagdo, H. H., & Bakti, C. A. (2021). Aplikasi pengetesan karakter personal berdasarkan metode DISC berbasis web [Web-based personal character testing application using the DISC method]. Sains Teknologi Manajemen Jurnal (STMJ), 2(1), 18-25. https://unkartur.ac.id/journal/index.php/stmj/article/view/15/15
Lukito, D. (2023). Hard skills and soft skills on performance: Influence and application of Bengkulu City Education Service employees. East Asian Journal of Multidisciplinary Research, 2(11), 4695-4710.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners (Version 4). arXiv. https://arxiv.org/abs/2005.14165v4
Wei, J., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. arXiv. https://arxiv.org/abs/2201.11903
Yao, S., et al. (2022). ReAct: Synergizing reasoning and acting in language models. arXiv. https://arxiv.org/abs/2210.03629
Vatsal, S., & Dubey, H. (2024). A survey of prompt engineering methods in large language models for different NLP tasks. arXiv. https://arxiv.org/abs/2407.12994
Liu, P., et al. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv. https://arxiv.org/abs/2107.13586
Gu, J., Han, Z., Chen, S., Beirami, A., He, B., Zhang, G., Liao, R., Qin, Y., Tresp, V., & Torr, P. (2023). A systematic survey of prompt engineering on vision-language foundation models. arXiv. https://arxiv.org/abs/2307.12980
Online sources and electronic publications
OpenAI. (2023). GPT Best Practices. https://platform.openai.com/docs/guides/gpt-best-practices
Anthropic. (2023). Prompting Claude Guide. https://docs.anthropic.com/claude/docs/prompting-guide
DAIR.AI. Prompt Engineering Guide. https://github.com/dair-ai/Prompt-Engineering-Guide
Google Cloud. (2024). Prompt Design Guide. https://cloud.google.com/generative-ai/docs/prompt-design
License
Copyright (c) 2025 Science Technology and Management Journal

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.