GPT-4 Best Practices for Prompts
Since its release in 2023, GPT-4 has attracted widespread attention for its outstanding performance, and users have gradually accumulated experience in writing prompts. Countless experiments have shown that well-crafted prompts produce markedly better output, a view that OpenAI has officially endorsed.
To help users get more out of the model, OpenAI published GPT Best Practices, which passes on prompt-writing experience from an official perspective. The guide consolidates techniques that earlier users discovered through experimentation and endorses the strategies that reliably lead to better results.
Six Strategies for Better Prompts
OpenAI lists six strategies in its GPT Best Practices guide:
- Write clear instructions.
- Request GPT to provide references.
- Split complex tasks into simple tasks.
- Provide additional thinking time.
- Use external tools.
- Conduct systematic testing.
These six strategies can improve GPT-4's output. Although OpenAI does not state whether they apply to other models, the reasoning behind them suggests they should also be valuable for GPT-3.5 and beyond. Below, we examine each of the six strategies in turn.
Write Clear Instructions
Although ChatGPT is highly capable, its output depends on the information users supply. In practice, users may find GPT-4's answers too brief and have to ask for more length and detail. OpenAI therefore recommends stating requirements explicitly, so that GPT-4 has less to guess: specify the desired length and level of detail, and if a particular output format is needed, state it directly in the prompt.
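As a minimal sketch of this strategy, the two prompts below make the same request; the detailed one spells out the length, format, and where the input text begins, so the model has less to guess. The template and field name are illustrative, not an OpenAI API:

```python
# Two versions of the same request. The detailed prompt states the
# required length and the exact output format, leaving less for the
# model to infer.
vague_prompt = "Summarize the meeting notes."

detailed_prompt = (
    "Summarize the meeting notes below in a single paragraph of at "
    "most 80 words, then list every action item as a Markdown bullet "
    "in the form '- [owner] task'.\n\n"
    "Meeting notes:\n\"\"\"\n{notes}\n\"\"\""
)

def build_prompt(notes: str) -> str:
    """Fill the detailed template with the user's meeting notes."""
    return detailed_prompt.format(notes=notes)
```

Delimiting the input with triple quotes also makes clear which part of the prompt is data rather than instructions.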
Request GPT to Provide Reference
Since GPT's debut, many researchers have found that ChatGPT can fabricate answers, and even citations, for academic questions or obscure topics. Users unfamiliar with the subject may not be able to spot these errors. OpenAI suggests asking GPT in the prompt to provide references for the generated content, ideally quoting from reference text supplied in the prompt itself, to reduce false output.
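One way to apply this, sketched below, is to embed trusted reference text in the prompt and instruct the model to answer only from it, citing the supporting passage and admitting when the answer is absent. The wording and fallback phrase are assumptions for illustration:

```python
def prompt_with_reference(question: str, reference: str) -> str:
    """Embed reference text in the prompt and ask the model to answer
    only from that text, quoting the passage it relied on."""
    return (
        "Answer the question using only the reference text below. "
        "Quote the passage that supports your answer. If the answer "
        "is not in the text, reply 'insufficient information'.\n\n"
        f'Reference:\n"""\n{reference}\n"""\n\n'
        f"Question: {question}"
    )
```

Grounding the answer in supplied text gives the user something concrete to verify, instead of a citation the model may have invented.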
Split Complex Tasks into Simple Tasks
When a task involves many steps, GPT may make errors in intermediate computation; this is especially visible on mathematical problems. To obtain correct results, users can divide a complex task into simpler subtasks and require GPT to work through them step by step. Structuring prompts this way improves the accuracy of the output.
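A simple pipeline, sketched below, turns this into code: one complex request becomes an ordered list of smaller prompts, and each answer feeds the next step. The subtask wording and the `ask` callback are hypothetical; `ask` stands in for any model call:

```python
# Hypothetical decomposition of a word problem into three simpler
# prompts, run in sequence. `ask(prompt)` is any model call.
subtasks = [
    "Step 1: Extract every numeric quantity from the input.",
    "Step 2: Using the quantities from step 1, write the arithmetic expression.",
    "Step 3: Evaluate the expression from step 2 and state only the final number.",
]

def run_pipeline(ask, problem: str) -> str:
    """Feed each subtask the previous subtask's answer."""
    context = problem
    for step in subtasks:
        context = ask(f"{step}\n\nInput:\n{context}")
    return context
```

Because each step sees only the previous step's output, errors are easier to localize than in a single monolithic prompt.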
Provide Additional Thinking Time
As with complex tasks, forcing GPT to answer immediately can lead to reasoning errors. Users can instead ask the model to work out its own solution first and take enough time to reach an answer. Users can also ask whether anything is missing, helping GPT find the right approach. This strategy combines well with splitting complex tasks into simple tasks: give GPT simpler tasks and more time for each.
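A common illustration of this strategy is grading: instead of asking the model to judge a student's answer immediately, the prompt below (illustrative wording, not an official template) tells it to solve the problem itself first and only then compare:

```python
def grading_prompt(problem: str, student_solution: str) -> str:
    """Ask the model to work out its own solution before judging the
    student's answer, rather than deciding immediately."""
    return (
        "First work out your own solution to the problem, showing "
        "each step. Then compare your solution to the student's "
        "solution, and only then decide whether it is correct.\n\n"
        f"Problem: {problem}\n\n"
        f"Student's solution: {student_solution}"
    )
```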
Use External Tools
To improve GPT-4's efficiency and help it with mathematical and programming problems, OpenAI opened a plugin system. Users can rely on such external tools to compensate for the model's weaknesses, for example by delegating calculation to code rather than trusting the model's arithmetic. This also broadens GPT-4's range of applications across fields.
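The idea can be sketched without any plugin: have the model emit a marker for anything it should not compute itself, and run that part locally. The `CALC(...)` convention below is entirely hypothetical, not an OpenAI API; the whitelist keeps `eval` restricted to plain arithmetic:

```python
import re

def calculator_tool(expression: str) -> str:
    """Evaluate simple arithmetic locally instead of trusting the
    model's mental math. Only digits, basic operators, parentheses,
    and spaces are allowed, as a minimal safety check."""
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))  # restricted by the whitelist above

def answer_with_tools(model_reply: str) -> str:
    """If the reply contains a hypothetical CALC(...) marker, compute
    the expression externally and substitute the exact result."""
    match = re.search(r"CALC\(([^)]*)\)", model_reply)
    if match:
        return model_reply.replace(match.group(0),
                                   calculator_tool(match.group(1)))
    return model_reply
```

Real deployments would use the plugin system or function calling for this dispatch; the sketch only shows the division of labor.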
Conduct Systematic Testing
Although users can improve GPT's output with the strategies above, an improvement may hold only for a handful of examples; across a large enough set of prompts, overall quality may even decrease. To confirm that a prompt change is generally effective, users therefore need to test it systematically, for example with an evaluation suite ("evals"), and verify that GPT-4's output genuinely improves across a wide range of tasks.
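At its core, such an evaluation is just scoring one prompting strategy over many cases instead of one. The minimal harness below is an assumption-laden sketch: `ask` stands in for any model call, and exact-match scoring is the simplest possible metric:

```python
def evaluate(ask, test_cases) -> float:
    """Score a prompting strategy over many cases: `ask(prompt)` is
    any model call, `test_cases` is a list of (prompt, expected)
    pairs. Returns the fraction of exact matches."""
    hits = sum(
        1 for prompt, expected in test_cases
        if ask(prompt).strip() == expected
    )
    return hits / len(test_cases)
```

Real eval suites use larger case sets and softer metrics (contains-answer, model-graded), but the principle is the same: measure before and after a prompt change.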