Project 2 – Chat GPT API app

By Arkadiusz Kulpa. API tests: here; API tests v2: here

Applicability of an AI Chat Assistant in resolving knowledge compartmentalization within a workplace

Executive Summary

This report investigates whether AI innovations can help manage the complex operations of a company suffering from compartmentalized knowledge, where individual teams must rely on the broader business for information and guidance that individual specialists find hard to come by. An efficient and scalable cloud LLM application is proposed, through which staff could submit queries and receive responses grounded in facts derived from company data to which the GPT model has access.

Such innovations will necessarily be applied during the company's ongoing transformation, part of which is to adopt AWS Cloud services. The artifact developed in the second part of this report therefore showcases a scalable AWS Lightsail instance providing a cost-effective VPS that hosts the application's WordPress front-end and PHP back-end server. An AWS API Gateway forwards requests to an AWS Lambda serverless function, which stores the OpenAI API keys and communicates with the model; the fetched response is then returned to the front-end, demonstrating the entire end-to-end process required to host a secure LLM app on cloud technologies.
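To make the Lambda stage concrete, the sketch below shows how a serverless function might proxy a front-end query to OpenAI's Chat Completions endpoint while keeping the API key server-side. The environment variable name, model choice, and event shape are assumptions for this illustration, not the report's actual code.

```python
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(query, api_key, model="gpt-4o-mini"):
    # Build the HTTPS request for OpenAI's Chat Completions endpoint.
    # The model name here is illustrative.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": query}],
    }).encode("utf-8")
    return urllib.request.Request(
        OPENAI_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def lambda_handler(event, context):
    # The key lives in a Lambda environment variable, so it never
    # reaches the WordPress front-end or the browser.
    api_key = os.environ["OPENAI_API_KEY"]
    query = json.loads(event["body"])["query"]
    req = build_request(query, api_key)
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["choices"][0]["message"]["content"]
    # API Gateway's proxy integration expects this response shape.
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

Keeping the key in the function's environment (or, better, AWS Secrets Manager) is what makes the API Gateway plus Lambda hop worthwhile compared with calling OpenAI directly from the front-end.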

Data security and privacy are considered, since OpenAI would become a third-party data processor for the company. LangChain is outlined as a framework that could facilitate splitting company documents into chunks, embedding them, and storing them in a vector store, so that each query sent to the OpenAI API is accompanied by relevant document snippets retrieved by embedding similarity. A simple demonstration of GPT-4's document-reading capability is shown as an example of a typical use case within the company.
This project demonstrates that LLMs work well with cloud technologies and can be implemented with relative ease, providing access to cutting-edge technology to any employee regardless of their technological acumen.
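The grounding step described above, retrieving the document snippets most similar to a query and prepending them to the prompt, can be sketched in minimal form. The toy character-frequency "embedding" below stands in for a real embedding model and for LangChain's vector-store abstractions; every name and the prompt wording are illustrative only.

```python
import math

def embed(text):
    # Toy "embedding": a character-frequency vector over a-z. A real
    # system would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank stored document chunks by similarity to the query embedding
    # and return the top-k snippets used to ground the answer.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query, documents):
    # Prepend the retrieved snippets so the model answers from company
    # data rather than from its general training knowledge.
    snippets = retrieve(query, documents)
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using only this company context:\n{context}\n\nQuestion: {query}"
```

In the proposed architecture this retrieval would run before the Lambda function calls the OpenAI API, so only short, relevant excerpts of company data are sent to the third-party processor.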