AI news
January 27, 2024

GPTStore: On the Cusp of an App Renaissance or Just a Blip in the GPT Timeline?

Anyone can now build apps in natural language with GPTs.

Roshni Ramnani
by 
Roshni Ramnani

This week, OpenAI announced the upcoming launch of their ‘GPTStore.’ In case you missed it — OpenAI released the ability to create custom GPTs, allowing the creation of application-specific bots on top of GPT-4. This is achieved simply by using a set of specialized prompts and custom data. This development enables anyone to create applications for the enterprise, education, or just for fun. [1]

As a technologist, I am both excited to see the kind of applications this will bring about and circumspect regarding whether people will:

a) Use applications that can be built simply with an advanced prompt, rather than writing the prompt and using ChatGPT;

b) Be willing to pay for such applications (considering OpenAI has promised that the best custom GPTs are monetizable); and most importantly,

c) Prioritize and ensure the safety metrics of the application

For points a) and b), we will simply have to wait and see. So far, the custom GPTs have received some attention, with many creating curated lists of them [4] [5]. I believe we will be able to create sophisticated enough applications using custom data and APIs that people will be willing to pay for. Similarly, for point b), if this is truly the ‘iPhone App Store’ moment many have been anticipating, then the possibilities are immense. The figure below illustrates the revenue of the iPhone App Store between 2017 and 2021

For point c), there are specific challenges related to safety and accuracy. Natural language is inherently ambiguous, unlike code itself, and it stands to reason that an application produced in natural language may yield variable and unexpected results. One perspective on this is that we are dealing with a non-deterministic method of development, and we should acknowledge this challenge and find ways to work around it. Perhaps the ease and speed of development far outway the ambiguity.

Additionally, despite explicit guardrails in the prompts, there is still the possibility that the chatbot could inadvertently produce hate speech, leak confidential data (uploaded during its development), or forget its persona or other specified aspects (included in the prompt description). This could be executed by a clever adversarial user through techniques known as prompt injection. A paper released in November examined 200 custom GPTs created by users and attempted to use prompt injection to extract the instructional prompts and knowledge files uploaded to them. By employing relatively simple prompts, the researchers achieved a 97% success rate in extracting prompts and a 100% success rate in leaking confidential knowledge [2].

I am sure OpenAI is looking into ways to help mitigate the safety issues. In the meantime, here are some steps we can take: 1) disable the code interpreter in the custom GPTs and 2) use well-articulated defensive prompts when building the custom GPTs. The community blog [3] offers some excellent suggestions and insights on this topic, including methods to test your prompts against adversarial attacks. For my part, I will definitely be trying out the new customGPT’s and building a few myself.

References:

[1] https://openai.com/blog/introducing-gpts

[2] Yu, Jiahao, et al. “Assessing Prompt Injection Risks in 200+ Custom GPTs.” arXiv preprint arXiv:2311.11538 (2023).

[3] How can you protect your GPT? — GPT builders / Plugins / Actions builders — OpenAI Developer Forum

[4] https://writesonic.com/blog/best-custom-gpts#:~:text=AI%20Cooking%20Assistant%20GPT,be%20your%20go%2Dto%20tool.

[5] https://github.com/devisasari/awesome-chatgpt-store

Get your brand or product featured on Jim Monge's audience