Generative AI

Components of Generative AI:

Prompt Engineering: the skill of asking the machine the right questions. With well-crafted prompts, AI can produce highly relevant and meaningful content that humans need only edit lightly before putting it to use.
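A minimal sketch of what "asking the right questions" can look like in practice: assembling a prompt from a role, context, task, and output format. The template fields and wording below are illustrative assumptions, not a prescribed standard.

```python
# Structuring a prompt so the model gets a role, context, and an
# explicit output format. Field names here are illustrative.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from its components."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    role="a technical editor",
    context="a draft blog post about vector databases",
    task="summarize the post in three bullet points",
    output_format="a markdown bullet list",
)
print(prompt)
```

The same structured string can then be sent to any LLM API; the structure, not the API, is the point of prompt engineering.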

The ability to quickly retrieve, contextualize, and easily interpret knowledge may be the most powerful business application of large language models.

This engineering discipline has emerged as a game-changing field. By leveraging the capabilities of LLMs, it enables you to create compelling AI applications with ease: generating persuasive text, translating languages seamlessly, crafting captivating content, and obtaining informative answers to your queries.

Low-level form of generative AI –> Auto-complete, which suggests what the remainder of the word or sentence you are typing might be.
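A toy illustration of that low-level form: auto-complete over a small vocabulary, ranking candidate completions by how often each word has been seen. The vocabulary and frequency counts below are illustrative assumptions.

```python
# Toy auto-complete: suggest the most frequent vocabulary words
# that share a prefix with what the user has typed so far.

from collections import Counter

# Hypothetical word frequencies standing in for usage statistics.
vocabulary = Counter({
    "generative": 12, "generate": 9, "general": 5, "genome": 2,
})

def complete(prefix: str, k: int = 3) -> list[str]:
    """Return up to k most frequent vocabulary words starting with prefix."""
    matches = [(w, n) for w, n in vocabulary.items() if w.startswith(prefix)]
    matches.sort(key=lambda pair: -pair[1])
    return [w for w, _ in matches[:k]]

print(complete("gene"))  # -> ['generative', 'generate', 'general']
```

Real auto-complete systems condition on much more context than a prefix, but the ranking-by-likelihood idea is the same.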

ELIZA (MIT, 1966) –> ChatGPT –> DALL-E –> Stable Diffusion

What is behind Generative AI –> ML is used to process very large amounts (petabyte scale) of visual and/or textual data scraped from the internet, and to determine which things are most likely to appear near other things.

Models are developed to accommodate all the scraped data collected (data parsing is also part of the pipeline); training proceeds through data indexing.
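The "what appears near other things" idea can be sketched at toy scale with bigram counts: tally which word follows which in a corpus, then predict the most likely next word. Real systems do this over petabytes with transformers; the tiny corpus and counting scheme below are illustrative assumptions.

```python
# Count bigrams in a tiny corpus and predict the likeliest next word.
# This stands in, at toy scale, for learning what appears near what.

from collections import Counter, defaultdict

corpus = "the model reads the text and the model predicts the next word".split()

bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    bigrams[prev][cur] += 1

def most_likely_next(word: str) -> str:
    """Return the word most often seen immediately after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "model" follows "the" most often here
```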

One technique that comes into play is the Transformer, which derives meaning from long sequences of text to understand how different words or semantic components relate to each other. The resulting vectorizations are stored in a vector database (Milvus, Weaviate, Pinecone, Qdrant (SE), Vespa (SE), Zilliz – AWS cloud native) with connectors to stores like CosmosDB and Postgres.

Tools: Gradio (limited customization) and Streamlit (more customization of the user interface) are open-source Python packages that help depict the components of an ML model or an API. They enrich the experience of everyone from coders to business consultants, who can get a feel for the model's thinking process by integrating and sharing through Anaconda's Jupyter notebooks.

Use Case#1

Enterprise Search to Gen AI –> Search is provisioned around the operating keywords via embeddings: each phrase or sentence of data in a document is encoded as an embedding in an n-dimensional space.

The embedding corresponding to the user's search query is encoded and mapped to its nearest neighbours in that n-dimensional embedding space, stored as vectors in the vector DB, and the matching results are shown. LLMs provide the sophisticated embeddings; a vector matching engine performs the lookup.
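The matching step above can be sketched with cosine similarity over toy vectors: encode query and documents as vectors, then retrieve the nearest neighbour. In production the vectors come from an LLM encoder and live in a vector DB (Milvus, Pinecone, etc.); the hand-made 3-dimensional vectors and document names below are illustrative assumptions.

```python
# Nearest-neighbour retrieval over a tiny in-memory "vector store"
# using cosine similarity, standing in for a vector matching engine.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical document embeddings (a real encoder outputs hundreds
# of dimensions; three are used here for readability).
doc_embeddings = {
    "invoice policy": [0.9, 0.1, 0.0],
    "travel policy":  [0.7, 0.6, 0.1],
    "security guide": [0.0, 0.2, 0.9],
}

def nearest(query_vec: list[float]) -> str:
    """Return the document whose embedding is most similar to the query."""
    return max(doc_embeddings, key=lambda d: cosine(query_vec, doc_embeddings[d]))

print(nearest([0.8, 0.2, 0.1]))  # closest to "invoice policy"
```

A vector DB replaces the linear scan in `nearest` with an approximate index so the lookup stays fast at millions of documents.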

The much-popularized Google search experience offered to consumers is now being leveraged for enterprises.

LLMs need not have 100-billion-parameter scale for every task; 20-billion-parameter models can be sufficient to solve this use case. This avoids one-size-fits-all problems and minimizes inference cost (as with variants of the PaLM models, a higher parameter count is directly proportional to higher inference cost).

A propensity analysis strategy yields a list of focus and highly motivated accounts, by vertical, to penetrate; teams then work with clients to differentiate through sell-to and sell-with partner organizations (to get better synergies). Generally this is the responsibility of the cloud vendor's business groups once an organization has a close partnership with the cloud accounts.