A Deep Dive into Prompt Engineering
The last few installments of this blog focused fairly narrowly on AI prompts for e-commerce and optimization. But, as we already know, gen AI can be used for almost anything. With the sharp rise in real-world gen AI use, attention to gen AI-related questions, such as what makes a prompt succeed, which platforms and models to use, and even how reliable generated results are, has gained a lot of momentum. The most tantalizing idea in this space is a single prompt to put all prompts to bed. Does such a thing actually exist? Can you ever truly craft the perfect prompt? And how do you know whether a prompt is any good, given that generated results usually look fine at a glance? Let's tackle a few of these questions.
For this blog specifically, we'll focus on some best practices for prompting with OpenAI's ChatGPT. A great starting point for beginners is the official OpenAI prompt engineering guide, which offers plenty of useful tips and tricks. The first step, before crafting any prompt, is choosing the model. The latest GPT models usually give more accurate results, since their training data is more recent and each release folds in fixes for problems found in earlier models. Newer models can also be pricier, though, so for personal use of ChatGPT a good recommendation is GPT-4.1, which offers a solid balance of intelligence, speed and cost-effectiveness.
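If you're working against the OpenAI API rather than the ChatGPT interface, you can also check which models your account can access before settling on one. Here's a minimal sketch using the official openai Python package (the model name is illustrative; substitute whatever your listing shows):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # See which models this account can use
    for model in client.models.list():
        print(model.id)

    # Pin your chosen model explicitly in each request
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)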
One of the most common best practices for prompting is to make a prompt as clear and easy to understand as possible -- not just in word choice but also in layout and where certain items are positioned. Put the instruction at the beginning of the prompt, then include the example or the text you need the AI to work on. And don't stop there: label that section as the text or the example, and set it off with a delimiter, so it's clearly separate from the instruction itself. An example of this from OpenAI's best practices guide is shown below.
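The instruction comes first, and the input is labeled and fenced off with triple quotes; the guide's example looks roughly like this (the {text input here} placeholder stands in for whatever text you paste):

    Summarize the text below as a bullet point list of the most important points.

    Text: """
    {text input here}
    """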
The importance of being specific when crafting your prompts cannot be overemphasized. Specifying things like length, tone, style and context can mean the difference between an OK result and a mostly great result that needs only minimal changes. Also note that there is no hard rule on whether zero-shot or few-shot prompting works better. Zero-shot prompts are straightforward and simple: they include no examples or demonstrations to "help" the AI understand the task. Few-shot prompts, by contrast, provide context and demonstrate to the model what is required (a concrete sketch of both styles follows this paragraph). These techniques are not exhaustive, of course; many others exist and new ones are being discovered all the time. You can learn more about other prompting techniques here. In practice you will almost always end up in few-shot territory, since ChatGPT and most other gen AI platforms suggest follow-up actions that, in a way, prompt you. It can be easier to start zero-shot and refine from there if the generated result isn't up to scratch. When prompting alone isn't enough, OpenAI suggests four major parts to fine-tuning, which mainly involve re-evaluating your examples and improving your original data set for a better result, a process known as supervised fine-tuning.
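To make the zero-shot/few-shot distinction concrete, here is a small sketch using the openai Python package on a made-up sentiment task (the reviews and labels are illustrative, not from OpenAI's docs):

    from openai import OpenAI

    client = OpenAI()

    # Zero-shot: just the instruction, no examples
    zero_shot = [
        {"role": "user",
         "content": "Classify the sentiment of this review as positive or negative: "
                    "'The checkout process was painless and shipping was fast.'"},
    ]

    # Few-shot: demonstrate the expected format with labeled examples first
    few_shot = [
        {"role": "user", "content": "Review: 'Arrived broken.' Sentiment:"},
        {"role": "assistant", "content": "negative"},
        {"role": "user", "content": "Review: 'Exactly as described, love it.' Sentiment:"},
        {"role": "assistant", "content": "positive"},
        {"role": "user",
         "content": "Review: 'The checkout process was painless and shipping was fast.' Sentiment:"},
    ]

    for name, messages in (("zero-shot", zero_shot), ("few-shot", few_shot)):
        response = client.chat.completions.create(model="gpt-4.1", messages=messages)
        print(name, "->", response.choices[0].message.content)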
[Figure: Example of supervised fine-tuning.]
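For reference, OpenAI's supervised fine-tuning API expects training data as a JSONL file with one chat-formatted example per line. A single illustrative record might look like this (shown wrapped for readability; the content is made up):

    {"messages": [
      {"role": "system", "content": "You classify product reviews as positive or negative."},
      {"role": "user", "content": "Review: 'Arrived broken.'"},
      {"role": "assistant", "content": "negative"}
    ]}

The thinking behind the "re-evaluate and improve your data set" advice is simply that if the fine-tuned model underperforms, the fix is usually better or more varied records in this file rather than a cleverer prompt.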
Another great tip from OpenAI for optimal results with ChatGPT is to be precise and avoid negations. As linguists suggest in plain-language guidelines, it's always clearer to say what to do rather than what not to do, and removing negations from a prompt can improve its quality considerably. Along the same lines, "leading" or nudging words can steer the model toward the pattern you want; a classic example is ending a code-generation prompt with the word "import" to nudge the model toward starting its answer in Python.
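As a quick illustration of the negation point (the wording here is mine, not OpenAI's):

    Less effective: Summarize this article. Don't use jargon and don't make it too long.
    Better:         Summarize this article in three sentences of plain, non-technical language.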
Keeping your prompt style and layout exactly the same for every project will not necessarily guarantee ideal results. A prompt in one style can produce excellent results, while a nearly identical prompt asking for something else produces a far less useful one. Several factors drive this: how much data is available on the topic; the timeline of the prompt (recent events can yield better results than something that happened many years ago, because older material may never have been digitized); the language used in the prompt (certain culturally specific words can be harder for the model to interpret); or content the model classifies as harmful under the guardrails set out by its rules.
For instance, prompting ChatGPT to "generate an image of a dog setting a house on fire" gets flagged as dangerous, an apparent depiction of arson. That does not mean anything is wrong with your prompt's structure: the very same outline, phrased as "generate an image of a house on fire," works perfectly. The style is identical; only the content differs slightly, and that changes how the model weighs the implications.
[Figure: Screenshot of a ChatGPT chat.]
Last but not least, the concepts of models and temperature are worth bearing in mind when using any generative AI platform, even outside professional settings. The model question was discussed earlier in terms of which GPT version best suits your needs; in most cases, the more recent the model, the better the features. Temperature controls how much randomness the model injects into its output: the higher the temperature, the more varied (and often more "creative") the result. Note, however, that more recent models such as GPT-5 have removed the temperature setting from the API in favor of a fixed default of 0.8. OpenAI's stated reasoning was that temperature behaves differently on this model architecture and that users were almost always better served by the fixed value.
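On models that still expose it, temperature is just a request parameter. A minimal sketch (the model name is illustrative, and valid values run from 0 to 2):

    from openai import OpenAI

    client = OpenAI()

    # Low temperature: focused, repeatable output (extraction, classification)
    # High temperature: more varied, "creative" output (brainstorming)
    for temp in (0.0, 1.2):
        response = client.chat.completions.create(
            model="gpt-4.1",
            temperature=temp,
            messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
        )
        print(f"temperature={temp}: {response.choices[0].message.content}")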
Prompting, at its core, remains a dynamic, ever-evolving practice that shifts with users, platforms, models, timelines and language use. Adapting your best practices accordingly, and keeping up with the latest updates and features, can significantly improve the results your prompts generate.
Posted by Ammaarah Mohamed on 01/07/2026