BY NURIN PATRA & AMMAR MIRZA
Artificial intelligence (AI) is increasingly shaping how people learn, communicate and interact with technology, but misconceptions about its capabilities remain widespread.
To clarify these misunderstandings, Sarawak Tribune spoke with Dr Suhailah Mohamed Noor, a senior lecturer at the Faculty of Civil Engineering, Universiti Teknologi MARA (UiTM) Pulau Pinang, at the STSS office in Kuching recently.
An expert in AI, education and ethical technology use, Suhailah shared insights on common misconceptions about AI, the importance of ethical guidance, and how students and educators can effectively harness AI through techniques like prompt chaining and the 4Cs of AI literacy: critical thinking, creativity, collaboration and communication.
Sarawak Tribune: What are the most common misconceptions you’ve observed about AI capabilities in public discussions, particularly in Malaysia or Southeast Asia?
From my experience conducting artificial intelligence (AI) training workshops with educators and students, the first common misconception is about the term “AI” itself. The tools we often refer to, such as ChatGPT, Gemini and Copilot, are actually generative AI, which is only a subset of the bigger picture of AI. Normally, when we ask ChatGPT, “What is AI?” it explains AI in general and does not specifically highlight generative AI, which can lead to misunderstanding.
The second misconception concerns prompts. Many people believe prompts need to be highly specific and often confuse them with the term “command prompt.” Command prompts are not related to AI; they are part of programming or coding. A prompt, on the other hand, is simply natural language input for generative AI.
Misunderstandings like this often arise from social media, where people tend to follow what others say without fully understanding it. Prompts do not require perfect commands or sentences; we just need to express what we want, and the AI can respond accordingly. Overall, the two main misconceptions are that AI is often used to refer specifically to generative AI, and that using AI requires “command prompts” rather than natural language prompts.
From your perspective, which sector suffers the most from misunderstanding of AI? And why?

I believe the education sector suffers the most from misunderstandings about AI. AI has actually been around for a long time; by 2018, newspapers were already discussing AI in education, but it had not yet reached end users. Back then, the focus was more on robotics, which is related to AI but was still largely unexplored.
AI can be understood in phases. The first is Narrow AI, or Weak AI, like Siri on the iPhone. Siri can only respond based on what it was programmed to do and cannot handle questions outside its design. Other examples include Google Search, Waze and Google Maps. Today, we are in the generative AI phase. Generative AI, like ChatGPT, can communicate naturally with humans and learn from interactions. The more natural the conversation, the better the AI responds. Unlike Narrow AI, generative AI adapts and improves through collaboration with humans, but its intelligence will never surpass human intelligence. Human input is always essential. As for the other two phases, we are not sure whether that time will come. General AI would be on par with human intelligence, while Super AI would go beyond it, surpassing human intelligence.
That is why many students do not fully understand generative AI and assume its responses are perfect. This leads to challenges in education because teachers and lecturers also hold divided opinions; some have never used AI or experienced it themselves. Without that understanding, it is difficult for educators to fairly assess students who use AI. Since there are no formal policies on AI in the syllabus yet, usage depends largely on individual initiative.
How should educators approach the use of AI in teaching and assessment?
Evaluating AI-assisted work requires a different approach. Assignments need to be designed with AI in mind. If ChatGPT can answer a question exactly as intended, that question must be transformed. This approach, which I call AI-aware assessment, ensures learning outcomes remain meaningful even in the age of AI.
Currently, many students use AI without enough human input, and many educators have divided opinions, often based on hearsay rather than experience. Some lecturers who don’t accept AI have never tried it themselves, making it difficult for them to judge students fairly. Policies on AI are still absent and syllabus changes in Malaysia only happen after curriculum reviews, which occur in cycles of about five years. The key is for educators to actively engage with AI, understand its capabilities and limitations and adjust teaching and assessment methods accordingly. This way, learning remains effective and fair, even as AI becomes more widely used.
Earlier, you mentioned the different phases of AI development. Based on your assessment, when do you think the transition from narrow AI to general AI might happen? And at this point in time, do you think we are already approaching the phase of general AI?
It’s hard to say exactly when General AI will arrive because research is still ongoing. Right now, we are not yet capable of achieving it and we do not fully understand it. For the moment, General AI remains more of a concept or an idea among researchers; it may happen, or it may not.
What is more important at this moment is the real issue we are facing, which is ethics. Ethics is the core concern, and that is actually what my books, such as Kuasai Prompt: Seni Memimpin AI Untuk Pelajar IPT and Bijak Kuasai Prompt: Seni Dan Sains Berinteraksi Dengan AI, focus on.
Young people learn and adapt to technology naturally and quickly. That is why constant guidance on ethics is crucial, so that they use technologies such as AI in a more responsible and ethical way.
How can users get the best results from AI like ChatGPT?
The key is learning how to guide AI. AI doesn’t read minds; it responds to what we provide. That’s why prompting is so important. In my book, Kuasai Prompt: Seni Memimpin AI Untuk Pelajar IPT, I say, “AI will not replace you unless you do not know how to lead it.” Leading AI means guiding the conversation clearly, providing context and interacting step by step. Techniques like prompt chaining, which is a conversational approach, help AI understand tasks more naturally. The more clearly and actively we communicate, the better and more accurate AI’s responses become.
Effective prompting also develops what we call the 4Cs: critical thinking, creativity, collaboration and communication, which are the key elements of AI literacy. Critical thinking helps users structure meaningful prompts, creativity allows them to refine follow-up instructions, collaboration ensures continuous interaction with AI and communication guarantees clarity. Combined with ethical guidance and human oversight, these skills enable students and users to make AI a partner in learning rather than a replacement for knowledge. In my book, I connect the definition of AI literacy with the 4Cs, as these skills form a crucial foundation for meaningful technology use; 21st-century learning is indeed the backbone of today’s educational curriculum. All four skills are therefore necessary to write effective prompts. All of these elements are also closely connected to AI ethics, the concept of Human-in-the-Loop (HITL), AI-aware education and prompt training.
What are some effective prompting techniques, and how do they help in education?

There are seven main prompting techniques, and one of them focuses on conversational interaction. The foundational ones are zero-shot, one-shot and few-shot prompting, where you provide no example, one example or several examples to guide AI. Advanced techniques include role prompting, purpose prompting, Chain of Thought (CoT) and prompt chaining, which combines multiple techniques in a step-by-step conversational flow. Among these techniques, prompt chaining is essentially a conversational approach. I often describe it as “casual conversation”: the more natural and relaxed the interaction, the better and more natural the AI’s responses become.
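To make the foundational techniques concrete, here is a minimal sketch in Python; it is an illustration, not an example from the interview, and the `ask()` helper and the review-classification task are hypothetical stand-ins for whichever generative AI chat interface and task you use:

```python
# A minimal sketch of zero-, one- and few-shot prompts.
# `ask()` is a hypothetical placeholder for whatever generative AI
# chat interface you use; the review-classification task is invented.

def ask(prompt: str) -> str:
    """Placeholder: send `prompt` to a generative AI and return its reply."""
    return "(model reply would appear here)"

# Zero-shot: no examples, just the task in natural language.
zero_shot = "Classify this review as positive or negative: 'The food was cold.'"

# One-shot: a single worked example shows the expected format.
one_shot = ("Review: 'Great service!' -> positive\n"
            "Review: 'The food was cold.' ->")

# Few-shot: several examples make the expected pattern unmistakable.
few_shot = ("Review: 'Great service!' -> positive\n"
            "Review: 'I waited an hour.' -> negative\n"
            "Review: 'Lovely ambience.' -> positive\n"
            "Review: 'The food was cold.' ->")

print(ask(few_shot))
```

In practice, the examples themselves usually teach the model the expected output format, which is why few-shot prompts often need no extra instructions at all.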
Users should simply express what they want clearly and communicate openly. Some people say AI seems to understand their thoughts, but in reality, it is not reading their minds. It responds based on everything the user shares. Over time, it recognises patterns in the conversation. The responses generated by AI are a reflection of the person writing the prompt. That is why when some people claim that AI tools like ChatGPT are inaccurate or irrelevant, it often reflects how the prompts were written. This is exactly why I emphasise the power of prompting in my book.
How important are human knowledge and clear goals when using AI? Can you give examples?
Prompting requires knowledge. Without knowledge, it is impossible to write an effective prompt. Prompt chaining, in particular, is closely linked to ethical AI usage. It involves stages, where users first provide understanding before requesting outcomes. For example, when writing a report, users should not simply instruct AI to “write a report.” While AI can respond, the result may not be strong or accurate. Instead, users should first create a conversation, provide context and help AI understand the task. In doing so, they are essentially training the AI.
After training, users need to test the AI’s understanding to ensure it aligns with their expectations. If the response is not accurate, further guidance is needed. This process of teaching, testing and refining is essential. When corrected, the AI acknowledges the mistake, apologises and improves. This demonstrates how generative AI is programmed: it listens, admits errors and seeks correction. When we assign it a role and guide it properly, it responds accordingly. That is the true nature of generative AI when it is guided through ethical and effective prompting.
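As an illustration of that teach, test and refine cycle, here is a minimal prompt-chaining sketch in Python; the `ask()` placeholder and the bridge-report scenario are hypothetical, not taken from the interview:

```python
# A minimal prompt-chaining sketch: teach first, test understanding,
# then refine and request the outcome. `ask()` is a hypothetical
# placeholder; a real chat tool keeps earlier turns in context for you.

history: list[str] = []  # the running conversation

def ask(conversation: str) -> str:
    """Placeholder: in real use, send the conversation to a generative AI."""
    return "(model reply would appear here)"

def chat(message: str) -> str:
    history.append(f"User: {message}")
    reply = ask("\n".join(history))  # every turn carries the full chain
    history.append(f"AI: {reply}")
    return reply

# Stage 1 - teach: provide context before asking for anything.
chat("I am preparing a site-inspection report for a reinforced-concrete bridge.")
chat("The audience is the district engineer; the tone must be formal.")

# Stage 2 - test: check the AI's understanding before the real task.
chat("Summarise in one sentence what report I need and for whom.")

# Stage 3 - refine, then request the outcome.
chat("Good, but focus on crack mapping rather than load testing.")
chat("Now draft the report outline, section by section.")
```

The point of the sketch is the ordering: context and verification come before the request, which mirrors the stages described above.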
AI is a machine that understands terminology very well. The real issue is that we are often the ones who do not fully understand the terms. When we do not know the correct terminology, the process of teaching AI becomes difficult. We end up teaching it only based on our limited understanding—sometimes in a very basic or informal way. As a result, sometimes the AI understands and sometimes it does not. In reality, the failure lies with us when we are not subject matter experts. We are unable to guide AI in the most accurate and precise way. We must acknowledge our own limitations in terms of knowledge and how we use the tool. I personally tested this based on my background in Civil Engineering, particularly structural engineering. One major challenge in this field is drawing structural diagrams for exam questions, which can take days if done manually. In the past, when I tried using AI to generate diagrams, the results were unsatisfactory.
Besides, we cannot succeed with AI without clear goals and proper knowledge; progress is impossible without a clear objective, even in an era where almost anything is possible. One example is my own experience building a fully functional, professional website, the kind that would normally cost thousands of ringgit to develop, similar to a banking website.
I built it because I had a clear ambition: to create a website for Prompt Academy, which is essentially my product. I am the founder of Prompt Academy and I wanted a dedicated platform for it. The website, https://promptacademy.com.my/, is publicly accessible, although it has not been updated recently due to my schedule. Throughout the process, I treated ChatGPT as a consultant. I was able to do this because I had a clear goal. While I initially lacked technical ideas and specific knowledge, I began interacting with AI through structured prompt training. This process allowed me to clarify my thinking step by step.
It is not possible to build a website using zero-shot prompting alone. Instructions must be given incrementally. The more precise and focused the instructions, the better AI’s understanding becomes. I asked for guidance continuously: what to do next, what the following step should be and how to proceed.
This reinforces the key principle: we must have a clear destination and sufficient knowledge. For beginners, knowledge does not mean becoming an expert before starting. Learning can take place alongside AI, as tools like ChatGPT can help users expand, elaborate and deepen their understanding.
In the context of education, many students on social media platforms such as TikTok complain that their work is flagged as AI-generated even when it is not. Lecturers, however, justify this by using tools such as Turnitin or ZeroGPT. What is your view on the use of these tools?
This is precisely why students need to become more creative. Recently, I was contacted by a final-year project student whose report had been rejected because it was labelled as AI-generated. He explained that he had ideas but struggled to express them without AI due to weak English proficiency. Although he understood the subject matter, he could not fully communicate his ideas without AI assistance.
When he used AI, however, his work was rejected. In reality, such decisions often depend on the lecturer’s perspective rather than faculty policy. This highlights the need for creativity. If we do not give AI clear direction, it will respond at its own level, not at the user’s level. Sometimes, AI-generated sentences are immediately recognisable as not being written by a student.
One solution is to intentionally lower the language level, for example to a secondary or even primary school level, and then refine it. This is one way students can maintain authenticity. Another approach is to use different tools. For instance, Gemini often produces responses that feel more human-like. In my experience, its output resembles what I would write myself, only more efficiently, and it is sometimes not detected. Students can also replace words, use synonyms and combine multiple tools. This introduces a process. Relying on a single tool results in minimal engagement, but starting with ChatGPT, adjusting the language level, moving to another platform and refining the output creates a meaningful workflow.
In the AI era, the process matters more than the final output. Learning has always been process-based, through practice and experience. Without a process, individuals lose out to those with experience. No single tool can detect everything. Many people confuse the similarity index with AI detection. Turnitin, for example, measures both similarity (plagiarism) and AI-generated content. When content is refined across multiple tools, detection becomes more difficult. What truly matters is whether students can justify every sentence: why it exists and why each point is included. If they cannot, it may be considered AI-generated. If they can, even with AI assistance, the ideas remain theirs. This differs from plagiarism detection. In prompt chaining, the ideas originate from the user, while AI serves only to clarify and enhance expression.
In journalism, AI-generated content and images often reveal themselves through repetitive patterns and similar leads. How can the seven prompting techniques help prevent this, and when does ethics become a concern in media reporting?
The key is to break the pattern by reviewing every sentence critically. AI tends to follow consistent structures: if one paragraph has five sentences, the next often does as well. If one sentence has a specific length, subsequent sentences mirror it. These patterns must be intentionally disrupted. By using prompt chaining with HITL, journalists can actively guide and refine AI output, critically reviewing and disrupting repetitive structures to ensure content remains ethical, contextual and driven by human judgment rather than AI alone.
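As one concrete way to keep the human in the loop, here is a small Python sketch, my own illustration rather than the interviewee’s method, that flags uniform paragraph and sentence structure in a draft so a journalist can decide what to rewrite; the thresholds are arbitrary assumptions:

```python
# Flag suspiciously uniform structure in a draft for human review.
# The thresholds below are arbitrary illustrative assumptions.
import statistics

def flag_uniform_structure(draft: str) -> list[str]:
    warnings = []
    paragraphs = [p for p in draft.split("\n\n") if p.strip()]
    # Does every paragraph have the same number of sentences?
    sentence_counts = [p.count(". ") + 1 for p in paragraphs]
    if len(paragraphs) > 2 and len(set(sentence_counts)) == 1:
        warnings.append("Every paragraph has the same number of sentences.")
    # Are sentence lengths (in words) unusually uniform?
    sentences = [s for p in paragraphs for s in p.split(". ") if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) > 3 and statistics.pstdev(lengths) < 3:
        warnings.append("Sentence lengths are unusually uniform.")
    return warnings

print(flag_uniform_structure("First sentence. Second one here.\n\n"
                             "Another sentence. And a second."))
```

The script only points at patterns; deciding how to rewrite them stays with the journalist, which is the essence of HITL.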
How does student reliance on AI affect their critical thinking and preparedness for the workforce, and how can HITL and prompt chaining help address this?
Prompt chaining encourages students to think critically; without active thinking, they won’t know what questions to ask. Learning always begins with knowledge. Change must start with educators, not students alone. If educators do not adapt, students will be unprepared for the real-world workforce, and it is unfair to place the responsibility solely on them. Educators need to understand HITL and incorporate it into assessments and courses. When prompt chaining is applied, students are required to engage thoughtfully, not just complete tasks passively. Assignments should be redesigned holistically across courses, not just in a few isolated instances. HITL, combined with prompt chaining, nurtures critical thinking and ensures students actively lead the AI-assisted process. Over time, the seven prompting techniques naturally integrate into human–AI interactions. Knowledge can be gained through reading and AI, but it must always be verified against textbooks and guidance from educators. The ultimate goal is for students to become subject matter experts. Following HITL, my fifth book, AI Dialectics, will explore how prompt chaining meets industry needs in an era of growing AI dependence. For now, the guiding principle remains HITL: human intelligence drives AI. AI is merely a reflection of human intelligence.